00:00:00.001 Started by upstream project "autotest-per-patch" build number 132351
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.117 Using shallow fetch with depth 1
00:00:00.117 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.117 > git --version # timeout=10
00:00:00.142 > git --version # 'git version 2.39.2'
00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.162 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.162 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.535 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.548 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.561 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.561 > git config core.sparsecheckout # timeout=10
00:00:06.571 > git read-tree -mu HEAD # timeout=10
00:00:06.587 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.615 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.615 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.733 [Pipeline] Start of Pipeline
00:00:06.748 [Pipeline] library
00:00:06.750 Loading library shm_lib@master
00:00:06.750 Library shm_lib@master is cached. Copying from home.
00:00:06.765 [Pipeline] node
00:00:21.767 Still waiting to schedule task
00:00:21.767 Waiting for next available executor on ‘vagrant-vm-host’
00:13:58.793 Running on VM-host-SM16 in /var/jenkins/workspace/raid-vg-autotest
00:13:58.803 [Pipeline] {
00:13:58.842 [Pipeline] catchError
00:13:58.844 [Pipeline] {
00:13:58.852 [Pipeline] wrap
00:13:58.856 [Pipeline] {
00:13:58.862 [Pipeline] stage
00:13:58.864 [Pipeline] { (Prologue)
00:13:58.875 [Pipeline] echo
00:13:58.876 Node: VM-host-SM16
00:13:58.881 [Pipeline] cleanWs
00:13:58.894 [WS-CLEANUP] Deleting project workspace...
00:13:58.894 [WS-CLEANUP] Deferred wipeout is used...
00:13:58.899 [WS-CLEANUP] done
00:13:59.206 [Pipeline] setCustomBuildProperty
00:13:59.301 [Pipeline] httpRequest
00:13:59.607 [Pipeline] echo
00:13:59.609 Sorcerer 10.211.164.20 is alive
00:13:59.619 [Pipeline] retry
00:13:59.622 [Pipeline] {
00:13:59.641 [Pipeline] httpRequest
00:13:59.646 HttpMethod: GET
00:13:59.647 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:59.647 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:59.648 Response Code: HTTP/1.1 200 OK
00:13:59.649 Success: Status code 200 is in the accepted range: 200,404
00:13:59.649 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:59.794 [Pipeline] }
00:13:59.811 [Pipeline] // retry
00:13:59.818 [Pipeline] sh
00:14:00.095 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:00.110 [Pipeline] httpRequest
00:14:00.425 [Pipeline] echo
00:14:00.427 Sorcerer 10.211.164.20 is alive
00:14:00.438 [Pipeline] retry
00:14:00.440 [Pipeline] {
00:14:00.456 [Pipeline] httpRequest
00:14:00.461 HttpMethod: GET
00:14:00.461 URL: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:00.462 Sending request to url: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:00.462 Response Code: HTTP/1.1 200 OK
00:14:00.463 Success: Status code 200 is in the accepted range: 200,404
00:14:00.463 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:02.729 [Pipeline] }
00:14:02.745 [Pipeline] // retry
00:14:02.752 [Pipeline] sh
00:14:03.027 + tar --no-same-owner -xf spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:06.338 [Pipeline] sh
00:14:06.612 + git -C spdk log --oneline -n5
00:14:06.612 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:14:06.612 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:14:06.612 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:14:06.612 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:14:06.612 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:14:06.631 [Pipeline] writeFile
00:14:06.647 [Pipeline] sh
00:14:06.926 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:06.937 [Pipeline] sh
00:14:07.213 + cat autorun-spdk.conf
00:14:07.214 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:07.214 SPDK_RUN_ASAN=1
00:14:07.214 SPDK_RUN_UBSAN=1
00:14:07.214 SPDK_TEST_RAID=1
00:14:07.214 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:07.220 RUN_NIGHTLY=0
00:14:07.222 [Pipeline] }
00:14:07.235 [Pipeline] // stage
00:14:07.250 [Pipeline] stage
00:14:07.252 [Pipeline] { (Run VM)
00:14:07.263 [Pipeline] sh
00:14:07.540 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:07.540 + echo 'Start stage prepare_nvme.sh'
00:14:07.540 Start stage prepare_nvme.sh
00:14:07.540 + [[ -n 4 ]]
00:14:07.540 + disk_prefix=ex4
00:14:07.540 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:14:07.540 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:14:07.540 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:14:07.540 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:07.540 ++ SPDK_RUN_ASAN=1
00:14:07.540 ++ SPDK_RUN_UBSAN=1
00:14:07.540 ++ SPDK_TEST_RAID=1
00:14:07.540 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:07.540 ++ RUN_NIGHTLY=0
00:14:07.540 + cd /var/jenkins/workspace/raid-vg-autotest
00:14:07.540 + nvme_files=()
00:14:07.540 + declare -A nvme_files
00:14:07.540 + backend_dir=/var/lib/libvirt/images/backends
00:14:07.540 + nvme_files['nvme.img']=5G
00:14:07.540 + nvme_files['nvme-cmb.img']=5G
00:14:07.540 + nvme_files['nvme-multi0.img']=4G
00:14:07.540 + nvme_files['nvme-multi1.img']=4G
00:14:07.540 + nvme_files['nvme-multi2.img']=4G
00:14:07.540 + nvme_files['nvme-openstack.img']=8G
00:14:07.540 + nvme_files['nvme-zns.img']=5G
00:14:07.540 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:07.540 + (( SPDK_TEST_FTL == 1 ))
00:14:07.540 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:07.540 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:07.540 + for nvme in "${!nvme_files[@]}"
00:14:07.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:14:07.540 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:07.540 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:14:07.540 + echo 'End stage prepare_nvme.sh'
00:14:07.540 End stage prepare_nvme.sh
00:14:07.552 [Pipeline] sh
00:14:07.832 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:07.832 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:14:07.832
00:14:07.832 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:14:07.832 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:14:07.832 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:14:07.832 HELP=0
00:14:07.832 DRY_RUN=0
00:14:07.832 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:14:07.832 NVME_DISKS_TYPE=nvme,nvme,
00:14:07.832 NVME_AUTO_CREATE=0
00:14:07.832 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:14:07.832 NVME_CMB=,,
00:14:07.832 NVME_PMR=,,
00:14:07.832 NVME_ZNS=,,
00:14:07.832 NVME_MS=,,
00:14:07.832 NVME_FDP=,,
00:14:07.832 SPDK_VAGRANT_DISTRO=fedora39
00:14:07.832 SPDK_VAGRANT_VMCPU=10
00:14:07.832 SPDK_VAGRANT_VMRAM=12288
00:14:07.832 SPDK_VAGRANT_PROVIDER=libvirt
00:14:07.832 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:14:07.832 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:07.832 SPDK_OPENSTACK_NETWORK=0
00:14:07.832 VAGRANT_PACKAGE_BOX=0
00:14:07.832 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:14:07.832 FORCE_DISTRO=true
00:14:07.832 VAGRANT_BOX_VERSION=
00:14:07.832 EXTRA_VAGRANTFILES=
00:14:07.832 NIC_MODEL=e1000
00:14:07.832
00:14:07.832 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:14:07.832 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:14:11.115 Bringing machine 'default' up with 'libvirt' provider...
00:14:11.682 ==> default: Creating image (snapshot of base box volume).
00:14:11.682 ==> default: Creating domain with the following settings...
00:14:11.682 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732086695_43a41ec7117adb483a3e
00:14:11.682 ==> default: -- Domain type: kvm
00:14:11.682 ==> default: -- Cpus: 10
00:14:11.682 ==> default: -- Feature: acpi
00:14:11.682 ==> default: -- Feature: apic
00:14:11.682 ==> default: -- Feature: pae
00:14:11.682 ==> default: -- Memory: 12288M
00:14:11.682 ==> default: -- Memory Backing: hugepages:
00:14:11.682 ==> default: -- Management MAC:
00:14:11.682 ==> default: -- Loader:
00:14:11.682 ==> default: -- Nvram:
00:14:11.682 ==> default: -- Base box: spdk/fedora39
00:14:11.682 ==> default: -- Storage pool: default
00:14:11.682 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732086695_43a41ec7117adb483a3e.img (20G)
00:14:11.682 ==> default: -- Volume Cache: default
00:14:11.682 ==> default: -- Kernel:
00:14:11.682 ==> default: -- Initrd:
00:14:11.682 ==> default: -- Graphics Type: vnc
00:14:11.682 ==> default: -- Graphics Port: -1
00:14:11.682 ==> default: -- Graphics IP: 127.0.0.1
00:14:11.682 ==> default: -- Graphics Password: Not defined
00:14:11.682 ==> default: -- Video Type: cirrus
00:14:11.682 ==> default: -- Video VRAM: 9216
00:14:11.682 ==> default: -- Sound Type:
00:14:11.682 ==> default: -- Keymap: en-us
00:14:11.682 ==> default: -- TPM Path:
00:14:11.682 ==> default: -- INPUT: type=mouse, bus=ps2
00:14:11.682 ==> default: -- Command line args:
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:11.682 ==> default: -> value=-drive,
00:14:11.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:11.682 ==> default: -> value=-drive,
00:14:11.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:11.682 ==> default: -> value=-drive,
00:14:11.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:11.682 ==> default: -> value=-drive,
00:14:11.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:14:11.682 ==> default: -> value=-device,
00:14:11.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:11.682 ==> default: Creating shared folders metadata...
00:14:11.940 ==> default: Starting domain.
00:14:13.878 ==> default: Waiting for domain to get an IP address...
00:14:31.976 ==> default: Waiting for SSH to become available...
00:14:31.976 ==> default: Configuring and enabling network interfaces...
00:14:35.267 default: SSH address: 192.168.121.245:22
00:14:35.267 default: SSH username: vagrant
00:14:35.267 default: SSH auth method: private key
00:14:37.209 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:14:45.321 ==> default: Mounting SSHFS shared folder...
00:14:46.256 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:14:46.256 ==> default: Checking Mount..
00:14:47.189 ==> default: Folder Successfully Mounted!
00:14:47.189 ==> default: Running provisioner: file...
00:14:48.120 default: ~/.gitconfig => .gitconfig
00:14:48.685
00:14:48.685 SUCCESS!
00:14:48.685
00:14:48.685 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:14:48.685 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:14:48.685 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:14:48.685
00:14:48.691 [Pipeline] }
00:14:48.706 [Pipeline] // stage
00:14:48.715 [Pipeline] dir
00:14:48.716 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:14:48.718 [Pipeline] {
00:14:48.730 [Pipeline] catchError
00:14:48.732 [Pipeline] {
00:14:48.744 [Pipeline] sh
00:14:49.025 + vagrant ssh-config --host vagrant
00:14:49.025 + sed -ne /^Host/,$p
00:14:49.025 + tee ssh_conf
00:14:52.325 Host vagrant
00:14:52.325 HostName 192.168.121.245
00:14:52.325 User vagrant
00:14:52.325 Port 22
00:14:52.325 UserKnownHostsFile /dev/null
00:14:52.325 StrictHostKeyChecking no
00:14:52.325 PasswordAuthentication no
00:14:52.325 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:14:52.325 IdentitiesOnly yes
00:14:52.325 LogLevel FATAL
00:14:52.325 ForwardAgent yes
00:14:52.325 ForwardX11 yes
00:14:52.325
00:14:52.337 [Pipeline] withEnv
00:14:52.339 [Pipeline] {
00:14:52.353 [Pipeline] sh
00:14:52.628 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:14:52.628 source /etc/os-release
00:14:52.628 [[ -e /image.version ]] && img=$(< /image.version)
00:14:52.628 # Minimal, systemd-like check.
00:14:52.628 if [[ -e /.dockerenv ]]; then
00:14:52.628 # Clear garbage from the node's name:
00:14:52.628 # agt-er_autotest_547-896 -> autotest_547-896
00:14:52.628 # $HOSTNAME is the actual container id
00:14:52.628 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:14:52.628 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:14:52.628 # We can assume this is a mount from a host where container is running,
00:14:52.628 # so fetch its hostname to easily identify the target swarm worker.
00:14:52.628 container="$(< /etc/hostname) ($agent)"
00:14:52.628 else
00:14:52.628 # Fallback
00:14:52.628 container=$agent
00:14:52.628 fi
00:14:52.628 fi
00:14:52.628 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:14:52.628
00:14:52.896 [Pipeline] }
00:14:52.911 [Pipeline] // withEnv
00:14:52.918 [Pipeline] setCustomBuildProperty
00:14:52.930 [Pipeline] stage
00:14:52.932 [Pipeline] { (Tests)
00:14:52.944 [Pipeline] sh
00:14:53.219 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:14:53.492 [Pipeline] sh
00:14:53.774 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:14:54.048 [Pipeline] timeout
00:14:54.049 Timeout set to expire in 1 hr 30 min
00:14:54.051 [Pipeline] {
00:14:54.066 [Pipeline] sh
00:14:54.345 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:14:54.912 HEAD is now at 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:14:54.924 [Pipeline] sh
00:14:55.206 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:14:55.479 [Pipeline] sh
00:14:55.758 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:14:56.029 [Pipeline] sh
00:14:56.304 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:14:56.563 ++ readlink -f spdk_repo
00:14:56.563 + DIR_ROOT=/home/vagrant/spdk_repo
00:14:56.563 + [[ -n /home/vagrant/spdk_repo ]]
00:14:56.563 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:14:56.563 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:14:56.563 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:14:56.563 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:14:56.563 + [[ -d /home/vagrant/spdk_repo/output ]]
00:14:56.563 + [[ raid-vg-autotest == pkgdep-* ]]
00:14:56.563 + cd /home/vagrant/spdk_repo
00:14:56.563 + source /etc/os-release
00:14:56.563 ++ NAME='Fedora Linux'
00:14:56.563 ++ VERSION='39 (Cloud Edition)'
00:14:56.563 ++ ID=fedora
00:14:56.563 ++ VERSION_ID=39
00:14:56.563 ++ VERSION_CODENAME=
00:14:56.563 ++ PLATFORM_ID=platform:f39
00:14:56.563 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:14:56.563 ++ ANSI_COLOR='0;38;2;60;110;180'
00:14:56.563 ++ LOGO=fedora-logo-icon
00:14:56.563 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:14:56.563 ++ HOME_URL=https://fedoraproject.org/
00:14:56.563 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:14:56.563 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:14:56.563 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:14:56.563 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:14:56.563 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:14:56.563 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:14:56.563 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:14:56.563 ++ SUPPORT_END=2024-11-12
00:14:56.563 ++ VARIANT='Cloud Edition'
00:14:56.563 ++ VARIANT_ID=cloud
00:14:56.563 + uname -a
00:14:56.563 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:14:56.563 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:14:57.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:57.129 Hugepages
00:14:57.129 node hugesize free / total
00:14:57.129 node0 1048576kB 0 / 0
00:14:57.129 node0 2048kB 0 / 0
00:14:57.129
00:14:57.129 Type BDF Vendor Device NUMA Driver Device Block devices
00:14:57.129 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:14:57.129 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:14:57.129 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:14:57.129 + rm -f /tmp/spdk-ld-path
00:14:57.129 + source autorun-spdk.conf
00:14:57.129 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:57.129 ++ SPDK_RUN_ASAN=1
00:14:57.129 ++ SPDK_RUN_UBSAN=1
00:14:57.129 ++ SPDK_TEST_RAID=1
00:14:57.129 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:57.129 ++ RUN_NIGHTLY=0
00:14:57.129 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:14:57.129 + [[ -n '' ]]
00:14:57.129 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:14:57.129 + for M in /var/spdk/build-*-manifest.txt
00:14:57.129 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:14:57.129 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:14:57.129 + for M in /var/spdk/build-*-manifest.txt
00:14:57.129 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:14:57.129 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:14:57.129 + for M in /var/spdk/build-*-manifest.txt
00:14:57.129 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:14:57.129 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:14:57.129 ++ uname
00:14:57.129 + [[ Linux == \L\i\n\u\x ]]
00:14:57.129 + sudo dmesg -T
00:14:57.129 + sudo dmesg --clear
00:14:57.129 + dmesg_pid=5374
00:14:57.129 + sudo dmesg -Tw
00:14:57.129 + [[ Fedora Linux == FreeBSD ]]
00:14:57.129 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:57.129 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:57.129 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:57.129 + [[ -x /usr/src/fio-static/fio ]]
00:14:57.129 + export FIO_BIN=/usr/src/fio-static/fio
00:14:57.129 + FIO_BIN=/usr/src/fio-static/fio
00:14:57.129 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:14:57.129 + [[ ! -v VFIO_QEMU_BIN ]]
00:14:57.129 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:14:57.129 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:57.129 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:57.129 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:14:57.129 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:57.129 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:57.129 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:57.129 07:12:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:14:57.129 07:12:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:57.129 07:12:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:57.129 07:12:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:14:57.130 07:12:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:14:57.130 07:12:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:14:57.130 07:12:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:57.130 07:12:21 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:14:57.130 07:12:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:14:57.130 07:12:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:57.388 07:12:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:14:57.388 07:12:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:57.388 07:12:21 -- scripts/common.sh@15 -- $ shopt -s extglob
00:14:57.388 07:12:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:14:57.388 07:12:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:57.388 07:12:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:57.388 07:12:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:57.388 07:12:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:57.388 07:12:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:57.388 07:12:21 -- paths/export.sh@5 -- $ export PATH
00:14:57.388 07:12:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:57.388 07:12:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:14:57.388 07:12:21 -- common/autobuild_common.sh@493 -- $ date +%s
00:14:57.388 07:12:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086741.XXXXXX
00:14:57.388 07:12:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086741.d7Zl3W
00:14:57.388 07:12:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:14:57.388 07:12:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:14:57.388 07:12:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:14:57.388 07:12:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:14:57.388 07:12:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:14:57.388 07:12:21 -- common/autobuild_common.sh@509 -- $ get_config_params
00:14:57.388 07:12:21 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:14:57.388 07:12:21 -- common/autotest_common.sh@10 -- $ set +x
00:14:57.388 07:12:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:14:57.388 07:12:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:14:57.388 07:12:21 -- pm/common@17 -- $ local monitor
00:14:57.388 07:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:57.388 07:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:57.388 07:12:21 -- pm/common@25 -- $ sleep 1
00:14:57.388 07:12:21 -- pm/common@21 -- $ date +%s
00:14:57.388 07:12:21 -- pm/common@21 -- $ date +%s
00:14:57.388 07:12:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086741
00:14:57.388 07:12:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086741
00:14:57.388 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086741_collect-vmstat.pm.log
00:14:57.388 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086741_collect-cpu-load.pm.log
00:14:58.322 07:12:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:14:58.322 07:12:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:14:58.322 07:12:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:14:58.322 07:12:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:14:58.322 07:12:22 -- spdk/autobuild.sh@16 -- $ date -u
00:14:58.322 Wed Nov 20 07:12:22 AM UTC 2024
00:14:58.322 07:12:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:14:58.322 v25.01-pre-202-g400f484f7
00:14:58.322 07:12:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:14:58.322 07:12:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:14:58.322 07:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:14:58.322 07:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:14:58.322 07:12:22 -- common/autotest_common.sh@10 -- $ set +x
00:14:58.322 ************************************
00:14:58.322 START TEST asan
00:14:58.322 ************************************
00:14:58.322 using asan
00:14:58.322 07:12:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:14:58.322
00:14:58.322 real 0m0.000s
00:14:58.322 user 0m0.000s
00:14:58.322 sys 0m0.000s
00:14:58.322 07:12:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:58.322 07:12:22 asan -- common/autotest_common.sh@10 -- $ set +x
00:14:58.322 ************************************
00:14:58.322 END TEST asan
00:14:58.322 ************************************
00:14:58.322 07:12:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:14:58.322 07:12:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:14:58.322 07:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:14:58.322 07:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:14:58.322 07:12:22 -- common/autotest_common.sh@10 -- $ set +x
00:14:58.322 ************************************
00:14:58.322 START TEST ubsan
00:14:58.322 ************************************
00:14:58.322 using ubsan
00:14:58.322 07:12:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:14:58.322
00:14:58.322 real 0m0.000s
00:14:58.322 user 0m0.000s
00:14:58.322 sys 0m0.000s
00:14:58.322 07:12:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:58.322 07:12:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:14:58.322 ************************************
00:14:58.322 END TEST ubsan
00:14:58.322 ************************************
00:14:58.580 07:12:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:14:58.580 07:12:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:14:58.580 07:12:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:14:58.580 07:12:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:14:58.580 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:58.580 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:59.146 Using 'verbs' RDMA provider
00:15:12.325 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:15:27.201 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:15:27.201 Creating mk/config.mk...done.
00:15:27.201 Creating mk/cc.flags.mk...done.
00:15:27.201 Type 'make' to build.
00:15:27.201 07:12:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:15:27.201 07:12:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:27.201 07:12:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:27.201 07:12:50 -- common/autotest_common.sh@10 -- $ set +x
00:15:27.201 ************************************
00:15:27.201 START TEST make
00:15:27.201 ************************************
00:15:27.201 07:12:50 make -- common/autotest_common.sh@1129 -- $ make -j10
00:15:27.201 make[1]: Nothing to be done for 'all'.
00:15:42.077 The Meson build system 00:15:42.077 Version: 1.5.0 00:15:42.077 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:15:42.077 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:42.077 Build type: native build 00:15:42.077 Program cat found: YES (/usr/bin/cat) 00:15:42.077 Project name: DPDK 00:15:42.077 Project version: 24.03.0 00:15:42.077 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:15:42.077 C linker for the host machine: cc ld.bfd 2.40-14 00:15:42.077 Host machine cpu family: x86_64 00:15:42.077 Host machine cpu: x86_64 00:15:42.077 Message: ## Building in Developer Mode ## 00:15:42.077 Program pkg-config found: YES (/usr/bin/pkg-config) 00:15:42.077 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:15:42.077 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:15:42.077 Program python3 found: YES (/usr/bin/python3) 00:15:42.077 Program cat found: YES (/usr/bin/cat) 00:15:42.077 Compiler for C supports arguments -march=native: YES 00:15:42.077 Checking for size of "void *" : 8 00:15:42.077 Checking for size of "void *" : 8 (cached) 00:15:42.077 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:15:42.077 Library m found: YES 00:15:42.077 Library numa found: YES 00:15:42.077 Has header "numaif.h" : YES 00:15:42.077 Library fdt found: NO 00:15:42.077 Library execinfo found: NO 00:15:42.077 Has header "execinfo.h" : YES 00:15:42.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:15:42.077 Run-time dependency libarchive found: NO (tried pkgconfig) 00:15:42.078 Run-time dependency libbsd found: NO (tried pkgconfig) 00:15:42.078 Run-time dependency jansson found: NO (tried pkgconfig) 00:15:42.078 Run-time dependency openssl found: YES 3.1.1 00:15:42.078 Run-time dependency libpcap found: YES 1.10.4 00:15:42.078 Has header "pcap.h" with dependency 
libpcap: YES 00:15:42.078 Compiler for C supports arguments -Wcast-qual: YES 00:15:42.078 Compiler for C supports arguments -Wdeprecated: YES 00:15:42.078 Compiler for C supports arguments -Wformat: YES 00:15:42.078 Compiler for C supports arguments -Wformat-nonliteral: NO 00:15:42.078 Compiler for C supports arguments -Wformat-security: NO 00:15:42.078 Compiler for C supports arguments -Wmissing-declarations: YES 00:15:42.078 Compiler for C supports arguments -Wmissing-prototypes: YES 00:15:42.078 Compiler for C supports arguments -Wnested-externs: YES 00:15:42.078 Compiler for C supports arguments -Wold-style-definition: YES 00:15:42.078 Compiler for C supports arguments -Wpointer-arith: YES 00:15:42.078 Compiler for C supports arguments -Wsign-compare: YES 00:15:42.078 Compiler for C supports arguments -Wstrict-prototypes: YES 00:15:42.078 Compiler for C supports arguments -Wundef: YES 00:15:42.078 Compiler for C supports arguments -Wwrite-strings: YES 00:15:42.078 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:15:42.078 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:15:42.078 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:15:42.078 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:15:42.078 Program objdump found: YES (/usr/bin/objdump) 00:15:42.078 Compiler for C supports arguments -mavx512f: YES 00:15:42.078 Checking if "AVX512 checking" compiles: YES 00:15:42.078 Fetching value of define "__SSE4_2__" : 1 00:15:42.078 Fetching value of define "__AES__" : 1 00:15:42.078 Fetching value of define "__AVX__" : 1 00:15:42.078 Fetching value of define "__AVX2__" : 1 00:15:42.078 Fetching value of define "__AVX512BW__" : (undefined) 00:15:42.078 Fetching value of define "__AVX512CD__" : (undefined) 00:15:42.078 Fetching value of define "__AVX512DQ__" : (undefined) 00:15:42.078 Fetching value of define "__AVX512F__" : (undefined) 00:15:42.078 Fetching value of define "__AVX512VL__" : 
(undefined) 00:15:42.078 Fetching value of define "__PCLMUL__" : 1 00:15:42.078 Fetching value of define "__RDRND__" : 1 00:15:42.078 Fetching value of define "__RDSEED__" : 1 00:15:42.078 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:15:42.078 Fetching value of define "__znver1__" : (undefined) 00:15:42.078 Fetching value of define "__znver2__" : (undefined) 00:15:42.078 Fetching value of define "__znver3__" : (undefined) 00:15:42.078 Fetching value of define "__znver4__" : (undefined) 00:15:42.078 Library asan found: YES 00:15:42.078 Compiler for C supports arguments -Wno-format-truncation: YES 00:15:42.078 Message: lib/log: Defining dependency "log" 00:15:42.078 Message: lib/kvargs: Defining dependency "kvargs" 00:15:42.078 Message: lib/telemetry: Defining dependency "telemetry" 00:15:42.078 Library rt found: YES 00:15:42.078 Checking for function "getentropy" : NO 00:15:42.078 Message: lib/eal: Defining dependency "eal" 00:15:42.078 Message: lib/ring: Defining dependency "ring" 00:15:42.078 Message: lib/rcu: Defining dependency "rcu" 00:15:42.078 Message: lib/mempool: Defining dependency "mempool" 00:15:42.078 Message: lib/mbuf: Defining dependency "mbuf" 00:15:42.078 Fetching value of define "__PCLMUL__" : 1 (cached) 00:15:42.078 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:15:42.078 Compiler for C supports arguments -mpclmul: YES 00:15:42.078 Compiler for C supports arguments -maes: YES 00:15:42.078 Compiler for C supports arguments -mavx512f: YES (cached) 00:15:42.078 Compiler for C supports arguments -mavx512bw: YES 00:15:42.078 Compiler for C supports arguments -mavx512dq: YES 00:15:42.078 Compiler for C supports arguments -mavx512vl: YES 00:15:42.078 Compiler for C supports arguments -mvpclmulqdq: YES 00:15:42.078 Compiler for C supports arguments -mavx2: YES 00:15:42.078 Compiler for C supports arguments -mavx: YES 00:15:42.078 Message: lib/net: Defining dependency "net" 00:15:42.078 Message: lib/meter: Defining 
dependency "meter" 00:15:42.078 Message: lib/ethdev: Defining dependency "ethdev" 00:15:42.078 Message: lib/pci: Defining dependency "pci" 00:15:42.078 Message: lib/cmdline: Defining dependency "cmdline" 00:15:42.078 Message: lib/hash: Defining dependency "hash" 00:15:42.078 Message: lib/timer: Defining dependency "timer" 00:15:42.078 Message: lib/compressdev: Defining dependency "compressdev" 00:15:42.078 Message: lib/cryptodev: Defining dependency "cryptodev" 00:15:42.078 Message: lib/dmadev: Defining dependency "dmadev" 00:15:42.078 Compiler for C supports arguments -Wno-cast-qual: YES 00:15:42.078 Message: lib/power: Defining dependency "power" 00:15:42.078 Message: lib/reorder: Defining dependency "reorder" 00:15:42.078 Message: lib/security: Defining dependency "security" 00:15:42.078 Has header "linux/userfaultfd.h" : YES 00:15:42.078 Has header "linux/vduse.h" : YES 00:15:42.078 Message: lib/vhost: Defining dependency "vhost" 00:15:42.078 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:15:42.078 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:15:42.078 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:15:42.078 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:15:42.078 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:15:42.078 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:15:42.078 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:15:42.078 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:15:42.078 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:15:42.078 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:15:42.078 Program doxygen found: YES (/usr/local/bin/doxygen) 00:15:42.078 Configuring doxy-api-html.conf using configuration 00:15:42.078 Configuring doxy-api-man.conf using configuration 00:15:42.078 Program mandb found: YES 
(/usr/bin/mandb) 00:15:42.078 Program sphinx-build found: NO 00:15:42.078 Configuring rte_build_config.h using configuration 00:15:42.078 Message: 00:15:42.078 ================= 00:15:42.078 Applications Enabled 00:15:42.078 ================= 00:15:42.078 00:15:42.078 apps: 00:15:42.078 00:15:42.078 00:15:42.078 Message: 00:15:42.078 ================= 00:15:42.078 Libraries Enabled 00:15:42.078 ================= 00:15:42.078 00:15:42.078 libs: 00:15:42.078 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:15:42.078 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:15:42.078 cryptodev, dmadev, power, reorder, security, vhost, 00:15:42.078 00:15:42.078 Message: 00:15:42.078 =============== 00:15:42.078 Drivers Enabled 00:15:42.078 =============== 00:15:42.078 00:15:42.078 common: 00:15:42.078 00:15:42.078 bus: 00:15:42.078 pci, vdev, 00:15:42.078 mempool: 00:15:42.078 ring, 00:15:42.078 dma: 00:15:42.078 00:15:42.078 net: 00:15:42.078 00:15:42.078 crypto: 00:15:42.078 00:15:42.078 compress: 00:15:42.078 00:15:42.078 vdpa: 00:15:42.078 00:15:42.078 00:15:42.078 Message: 00:15:42.078 ================= 00:15:42.078 Content Skipped 00:15:42.078 ================= 00:15:42.078 00:15:42.078 apps: 00:15:42.078 dumpcap: explicitly disabled via build config 00:15:42.078 graph: explicitly disabled via build config 00:15:42.078 pdump: explicitly disabled via build config 00:15:42.078 proc-info: explicitly disabled via build config 00:15:42.078 test-acl: explicitly disabled via build config 00:15:42.078 test-bbdev: explicitly disabled via build config 00:15:42.078 test-cmdline: explicitly disabled via build config 00:15:42.078 test-compress-perf: explicitly disabled via build config 00:15:42.078 test-crypto-perf: explicitly disabled via build config 00:15:42.078 test-dma-perf: explicitly disabled via build config 00:15:42.078 test-eventdev: explicitly disabled via build config 00:15:42.078 test-fib: explicitly disabled via build config 00:15:42.078 
test-flow-perf: explicitly disabled via build config 00:15:42.078 test-gpudev: explicitly disabled via build config 00:15:42.078 test-mldev: explicitly disabled via build config 00:15:42.078 test-pipeline: explicitly disabled via build config 00:15:42.078 test-pmd: explicitly disabled via build config 00:15:42.078 test-regex: explicitly disabled via build config 00:15:42.078 test-sad: explicitly disabled via build config 00:15:42.078 test-security-perf: explicitly disabled via build config 00:15:42.078 00:15:42.078 libs: 00:15:42.078 argparse: explicitly disabled via build config 00:15:42.078 metrics: explicitly disabled via build config 00:15:42.078 acl: explicitly disabled via build config 00:15:42.078 bbdev: explicitly disabled via build config 00:15:42.078 bitratestats: explicitly disabled via build config 00:15:42.078 bpf: explicitly disabled via build config 00:15:42.078 cfgfile: explicitly disabled via build config 00:15:42.078 distributor: explicitly disabled via build config 00:15:42.078 efd: explicitly disabled via build config 00:15:42.078 eventdev: explicitly disabled via build config 00:15:42.078 dispatcher: explicitly disabled via build config 00:15:42.078 gpudev: explicitly disabled via build config 00:15:42.078 gro: explicitly disabled via build config 00:15:42.078 gso: explicitly disabled via build config 00:15:42.078 ip_frag: explicitly disabled via build config 00:15:42.078 jobstats: explicitly disabled via build config 00:15:42.078 latencystats: explicitly disabled via build config 00:15:42.078 lpm: explicitly disabled via build config 00:15:42.078 member: explicitly disabled via build config 00:15:42.078 pcapng: explicitly disabled via build config 00:15:42.078 rawdev: explicitly disabled via build config 00:15:42.078 regexdev: explicitly disabled via build config 00:15:42.078 mldev: explicitly disabled via build config 00:15:42.078 rib: explicitly disabled via build config 00:15:42.078 sched: explicitly disabled via build config 00:15:42.078 
stack: explicitly disabled via build config 00:15:42.078 ipsec: explicitly disabled via build config 00:15:42.078 pdcp: explicitly disabled via build config 00:15:42.079 fib: explicitly disabled via build config 00:15:42.079 port: explicitly disabled via build config 00:15:42.079 pdump: explicitly disabled via build config 00:15:42.079 table: explicitly disabled via build config 00:15:42.079 pipeline: explicitly disabled via build config 00:15:42.079 graph: explicitly disabled via build config 00:15:42.079 node: explicitly disabled via build config 00:15:42.079 00:15:42.079 drivers: 00:15:42.079 common/cpt: not in enabled drivers build config 00:15:42.079 common/dpaax: not in enabled drivers build config 00:15:42.079 common/iavf: not in enabled drivers build config 00:15:42.079 common/idpf: not in enabled drivers build config 00:15:42.079 common/ionic: not in enabled drivers build config 00:15:42.079 common/mvep: not in enabled drivers build config 00:15:42.079 common/octeontx: not in enabled drivers build config 00:15:42.079 bus/auxiliary: not in enabled drivers build config 00:15:42.079 bus/cdx: not in enabled drivers build config 00:15:42.079 bus/dpaa: not in enabled drivers build config 00:15:42.079 bus/fslmc: not in enabled drivers build config 00:15:42.079 bus/ifpga: not in enabled drivers build config 00:15:42.079 bus/platform: not in enabled drivers build config 00:15:42.079 bus/uacce: not in enabled drivers build config 00:15:42.079 bus/vmbus: not in enabled drivers build config 00:15:42.079 common/cnxk: not in enabled drivers build config 00:15:42.079 common/mlx5: not in enabled drivers build config 00:15:42.079 common/nfp: not in enabled drivers build config 00:15:42.079 common/nitrox: not in enabled drivers build config 00:15:42.079 common/qat: not in enabled drivers build config 00:15:42.079 common/sfc_efx: not in enabled drivers build config 00:15:42.079 mempool/bucket: not in enabled drivers build config 00:15:42.079 mempool/cnxk: not in enabled 
drivers build config 00:15:42.079 mempool/dpaa: not in enabled drivers build config 00:15:42.079 mempool/dpaa2: not in enabled drivers build config 00:15:42.079 mempool/octeontx: not in enabled drivers build config 00:15:42.079 mempool/stack: not in enabled drivers build config 00:15:42.079 dma/cnxk: not in enabled drivers build config 00:15:42.079 dma/dpaa: not in enabled drivers build config 00:15:42.079 dma/dpaa2: not in enabled drivers build config 00:15:42.079 dma/hisilicon: not in enabled drivers build config 00:15:42.079 dma/idxd: not in enabled drivers build config 00:15:42.079 dma/ioat: not in enabled drivers build config 00:15:42.079 dma/skeleton: not in enabled drivers build config 00:15:42.079 net/af_packet: not in enabled drivers build config 00:15:42.079 net/af_xdp: not in enabled drivers build config 00:15:42.079 net/ark: not in enabled drivers build config 00:15:42.079 net/atlantic: not in enabled drivers build config 00:15:42.079 net/avp: not in enabled drivers build config 00:15:42.079 net/axgbe: not in enabled drivers build config 00:15:42.079 net/bnx2x: not in enabled drivers build config 00:15:42.079 net/bnxt: not in enabled drivers build config 00:15:42.079 net/bonding: not in enabled drivers build config 00:15:42.079 net/cnxk: not in enabled drivers build config 00:15:42.079 net/cpfl: not in enabled drivers build config 00:15:42.079 net/cxgbe: not in enabled drivers build config 00:15:42.079 net/dpaa: not in enabled drivers build config 00:15:42.079 net/dpaa2: not in enabled drivers build config 00:15:42.079 net/e1000: not in enabled drivers build config 00:15:42.079 net/ena: not in enabled drivers build config 00:15:42.079 net/enetc: not in enabled drivers build config 00:15:42.079 net/enetfec: not in enabled drivers build config 00:15:42.079 net/enic: not in enabled drivers build config 00:15:42.079 net/failsafe: not in enabled drivers build config 00:15:42.079 net/fm10k: not in enabled drivers build config 00:15:42.079 net/gve: not in 
enabled drivers build config 00:15:42.079 net/hinic: not in enabled drivers build config 00:15:42.079 net/hns3: not in enabled drivers build config 00:15:42.079 net/i40e: not in enabled drivers build config 00:15:42.079 net/iavf: not in enabled drivers build config 00:15:42.079 net/ice: not in enabled drivers build config 00:15:42.079 net/idpf: not in enabled drivers build config 00:15:42.079 net/igc: not in enabled drivers build config 00:15:42.079 net/ionic: not in enabled drivers build config 00:15:42.079 net/ipn3ke: not in enabled drivers build config 00:15:42.079 net/ixgbe: not in enabled drivers build config 00:15:42.079 net/mana: not in enabled drivers build config 00:15:42.079 net/memif: not in enabled drivers build config 00:15:42.079 net/mlx4: not in enabled drivers build config 00:15:42.079 net/mlx5: not in enabled drivers build config 00:15:42.079 net/mvneta: not in enabled drivers build config 00:15:42.079 net/mvpp2: not in enabled drivers build config 00:15:42.079 net/netvsc: not in enabled drivers build config 00:15:42.079 net/nfb: not in enabled drivers build config 00:15:42.079 net/nfp: not in enabled drivers build config 00:15:42.079 net/ngbe: not in enabled drivers build config 00:15:42.079 net/null: not in enabled drivers build config 00:15:42.079 net/octeontx: not in enabled drivers build config 00:15:42.079 net/octeon_ep: not in enabled drivers build config 00:15:42.079 net/pcap: not in enabled drivers build config 00:15:42.079 net/pfe: not in enabled drivers build config 00:15:42.079 net/qede: not in enabled drivers build config 00:15:42.079 net/ring: not in enabled drivers build config 00:15:42.079 net/sfc: not in enabled drivers build config 00:15:42.079 net/softnic: not in enabled drivers build config 00:15:42.079 net/tap: not in enabled drivers build config 00:15:42.079 net/thunderx: not in enabled drivers build config 00:15:42.079 net/txgbe: not in enabled drivers build config 00:15:42.079 net/vdev_netvsc: not in enabled drivers build 
config 00:15:42.079 net/vhost: not in enabled drivers build config 00:15:42.079 net/virtio: not in enabled drivers build config 00:15:42.079 net/vmxnet3: not in enabled drivers build config 00:15:42.079 raw/*: missing internal dependency, "rawdev" 00:15:42.079 crypto/armv8: not in enabled drivers build config 00:15:42.079 crypto/bcmfs: not in enabled drivers build config 00:15:42.079 crypto/caam_jr: not in enabled drivers build config 00:15:42.079 crypto/ccp: not in enabled drivers build config 00:15:42.079 crypto/cnxk: not in enabled drivers build config 00:15:42.079 crypto/dpaa_sec: not in enabled drivers build config 00:15:42.079 crypto/dpaa2_sec: not in enabled drivers build config 00:15:42.079 crypto/ipsec_mb: not in enabled drivers build config 00:15:42.079 crypto/mlx5: not in enabled drivers build config 00:15:42.079 crypto/mvsam: not in enabled drivers build config 00:15:42.079 crypto/nitrox: not in enabled drivers build config 00:15:42.079 crypto/null: not in enabled drivers build config 00:15:42.079 crypto/octeontx: not in enabled drivers build config 00:15:42.079 crypto/openssl: not in enabled drivers build config 00:15:42.079 crypto/scheduler: not in enabled drivers build config 00:15:42.079 crypto/uadk: not in enabled drivers build config 00:15:42.079 crypto/virtio: not in enabled drivers build config 00:15:42.079 compress/isal: not in enabled drivers build config 00:15:42.079 compress/mlx5: not in enabled drivers build config 00:15:42.079 compress/nitrox: not in enabled drivers build config 00:15:42.079 compress/octeontx: not in enabled drivers build config 00:15:42.079 compress/zlib: not in enabled drivers build config 00:15:42.079 regex/*: missing internal dependency, "regexdev" 00:15:42.079 ml/*: missing internal dependency, "mldev" 00:15:42.079 vdpa/ifc: not in enabled drivers build config 00:15:42.079 vdpa/mlx5: not in enabled drivers build config 00:15:42.079 vdpa/nfp: not in enabled drivers build config 00:15:42.079 vdpa/sfc: not in enabled 
drivers build config 00:15:42.079 event/*: missing internal dependency, "eventdev" 00:15:42.079 baseband/*: missing internal dependency, "bbdev" 00:15:42.079 gpu/*: missing internal dependency, "gpudev" 00:15:42.079 00:15:42.079 00:15:42.079 Build targets in project: 85 00:15:42.079 00:15:42.079 DPDK 24.03.0 00:15:42.079 00:15:42.079 User defined options 00:15:42.079 buildtype : debug 00:15:42.079 default_library : shared 00:15:42.079 libdir : lib 00:15:42.079 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:42.079 b_sanitize : address 00:15:42.079 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:15:42.079 c_link_args : 00:15:42.079 cpu_instruction_set: native 00:15:42.079 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:15:42.079 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:15:42.079 enable_docs : false 00:15:42.079 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:15:42.079 enable_kmods : false 00:15:42.079 max_lcores : 128 00:15:42.079 tests : false 00:15:42.079 00:15:42.079 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:15:42.079 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:15:42.079 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:15:42.079 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:15:42.079 [3/268] Linking static target lib/librte_kvargs.a 00:15:42.079 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:15:42.079 [5/268] Linking static target lib/librte_log.a 00:15:42.079 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:15:42.079 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.080 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:15:42.080 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:15:42.080 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:15:42.080 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:15:42.080 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:15:42.080 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:15:42.080 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:15:42.080 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:15:42.080 [16/268] Linking static target lib/librte_telemetry.a 00:15:42.080 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:15:42.080 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.080 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:15:42.080 [20/268] Linking target lib/librte_log.so.24.1 00:15:42.080 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:15:42.338 [22/268] Linking target lib/librte_kvargs.so.24.1 00:15:42.596 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:15:42.596 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:15:42.596 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:15:42.596 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 
00:15:42.596 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:15:42.596 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.596 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:15:42.596 [30/268] Linking target lib/librte_telemetry.so.24.1 00:15:42.596 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:15:42.854 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:15:42.854 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:15:42.854 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:15:42.854 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:15:43.113 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:15:43.113 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:15:43.371 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:15:43.371 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:15:43.628 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:15:43.628 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:15:43.628 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:15:43.628 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:15:43.628 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:15:43.887 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:15:43.887 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:15:43.887 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:15:44.145 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:15:44.145 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:15:44.145 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:15:44.403 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:15:44.403 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:15:44.662 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:15:44.662 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:15:44.662 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:15:44.920 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:15:44.920 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:15:44.920 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:15:44.920 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:15:44.920 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:15:44.921 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:15:45.180 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:45.180 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:45.438 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:45.438 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:15:45.438 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:45.438 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:15:45.695 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:45.953 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:15:45.953 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:15:45.953 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:15:45.953 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:15:46.211 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:46.211 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:46.211 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:46.211 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:15:46.211 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:15:46.211 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:15:46.211 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:15:46.777 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:46.777 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:15:46.777 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:15:46.777 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:15:46.777 [84/268] Linking static target lib/librte_ring.a 00:15:47.035 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:47.035 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:47.035 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:47.035 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:47.035 [89/268] Linking static target lib/librte_rcu.a 00:15:47.293 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:15:47.293 [91/268] Linking static target lib/librte_eal.a 00:15:47.293 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:47.552 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:47.552 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson 
to capture output) 00:15:47.552 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:47.552 [96/268] Linking static target lib/librte_mempool.a 00:15:47.552 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:47.552 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:15:47.552 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:15:47.810 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:48.093 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:48.093 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:48.358 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:48.358 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:48.358 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:48.358 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:48.358 [107/268] Linking static target lib/librte_mbuf.a 00:15:48.617 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:48.617 [109/268] Linking static target lib/librte_net.a 00:15:48.617 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:48.617 [111/268] Linking static target lib/librte_meter.a 00:15:48.876 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.876 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:48.876 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:48.876 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:49.134 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.134 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.134 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:49.701 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:49.701 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.701 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:49.960 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:49.960 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:50.218 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:50.218 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:50.477 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:50.477 [127/268] Linking static target lib/librte_pci.a 00:15:50.477 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:50.477 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:50.736 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:50.736 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:50.736 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:50.736 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:50.994 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:50.994 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:15:50.994 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:50.994 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:50.994 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:50.994 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:50.994 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:50.994 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:50.994 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:50.994 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:50.994 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:51.584 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:51.584 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:51.584 [147/268] Linking static target lib/librte_cmdline.a 00:15:51.584 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:51.843 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:51.843 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:51.843 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:52.102 [152/268] Linking static target lib/librte_ethdev.a 00:15:52.102 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:52.361 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:52.361 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:52.361 [156/268] Linking static target lib/librte_timer.a 00:15:52.361 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:52.361 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:52.361 [159/268] Linking static target lib/librte_hash.a 00:15:52.620 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:52.620 [161/268] Linking static target lib/librte_compressdev.a 00:15:52.620 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:52.879 [163/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:52.879 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:52.879 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.137 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:53.396 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:53.396 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:53.396 [169/268] Linking static target lib/librte_dmadev.a 00:15:53.396 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.396 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:53.654 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:53.654 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.654 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:53.654 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.912 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:54.171 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:54.428 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:54.428 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:54.428 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:54.428 [181/268] Linking static target lib/librte_cryptodev.a 00:15:54.428 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:54.429 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:54.429 [184/268] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:15:54.686 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:54.686 [186/268] Linking static target lib/librte_power.a 00:15:55.250 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:55.250 [188/268] Linking static target lib/librte_reorder.a 00:15:55.250 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:55.250 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:55.508 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:55.508 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:55.508 [193/268] Linking static target lib/librte_security.a 00:15:55.765 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:55.765 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:56.023 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:56.281 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:56.540 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:56.540 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:56.540 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:56.798 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:56.798 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:57.056 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:57.056 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:57.314 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:57.314 [206/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:57.314 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:57.639 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:57.639 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:57.639 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:57.639 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:57.933 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:57.933 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:57.933 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:57.933 [215/268] Linking static target drivers/librte_bus_vdev.a 00:15:57.933 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:57.933 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:57.934 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:57.934 [219/268] Linking static target drivers/librte_bus_pci.a 00:15:57.934 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:57.934 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:58.192 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:58.192 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:58.192 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:58.192 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:58.192 [226/268] Linking static target drivers/librte_mempool_ring.a 00:15:58.759 [227/268] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:59.326 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:59.583 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:59.583 [230/268] Linking target lib/librte_eal.so.24.1 00:15:59.841 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:59.841 [232/268] Linking target lib/librte_ring.so.24.1 00:15:59.841 [233/268] Linking target lib/librte_pci.so.24.1 00:15:59.841 [234/268] Linking target lib/librte_meter.so.24.1 00:15:59.841 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:15:59.841 [236/268] Linking target lib/librte_timer.so.24.1 00:15:59.841 [237/268] Linking target lib/librte_dmadev.so.24.1 00:15:59.841 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:16:00.099 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:16:00.099 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:16:00.099 [241/268] Linking target lib/librte_rcu.so.24.1 00:16:00.099 [242/268] Linking target lib/librte_mempool.so.24.1 00:16:00.099 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:16:00.099 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:16:00.099 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:16:00.099 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:16:00.099 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:16:00.099 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:16:00.099 [249/268] Linking target lib/librte_mbuf.so.24.1 00:16:00.357 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:16:00.357 [251/268] Linking 
target lib/librte_reorder.so.24.1 00:16:00.357 [252/268] Linking target lib/librte_net.so.24.1 00:16:00.357 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:16:00.357 [254/268] Linking target lib/librte_compressdev.so.24.1 00:16:00.614 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:16:00.614 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:16:00.614 [257/268] Linking target lib/librte_security.so.24.1 00:16:00.614 [258/268] Linking target lib/librte_hash.so.24.1 00:16:00.614 [259/268] Linking target lib/librte_cmdline.so.24.1 00:16:00.614 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:00.614 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:16:00.614 [262/268] Linking target lib/librte_ethdev.so.24.1 00:16:00.873 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:16:00.873 [264/268] Linking target lib/librte_power.so.24.1 00:16:04.169 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:16:04.169 [266/268] Linking static target lib/librte_vhost.a 00:16:05.108 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:05.376 [268/268] Linking target lib/librte_vhost.so.24.1 00:16:05.376 INFO: autodetecting backend as ninja 00:16:05.376 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:16:27.297 CC lib/ut/ut.o 00:16:27.297 CC lib/log/log.o 00:16:27.297 CC lib/log/log_flags.o 00:16:27.297 CC lib/log/log_deprecated.o 00:16:27.297 CC lib/ut_mock/mock.o 00:16:27.297 LIB libspdk_ut_mock.a 00:16:27.297 LIB libspdk_ut.a 00:16:27.297 LIB libspdk_log.a 00:16:27.297 SO libspdk_ut_mock.so.6.0 00:16:27.297 SO libspdk_ut.so.2.0 00:16:27.297 SO libspdk_log.so.7.1 00:16:27.297 SYMLINK libspdk_ut_mock.so 
00:16:27.297 SYMLINK libspdk_ut.so 00:16:27.297 SYMLINK libspdk_log.so 00:16:27.297 CC lib/util/base64.o 00:16:27.297 CC lib/util/bit_array.o 00:16:27.297 CC lib/util/cpuset.o 00:16:27.297 CC lib/util/crc32.o 00:16:27.297 CC lib/util/crc16.o 00:16:27.297 CC lib/util/crc32c.o 00:16:27.298 CC lib/dma/dma.o 00:16:27.298 CC lib/ioat/ioat.o 00:16:27.298 CXX lib/trace_parser/trace.o 00:16:27.298 CC lib/vfio_user/host/vfio_user_pci.o 00:16:27.298 CC lib/util/crc32_ieee.o 00:16:27.298 CC lib/util/crc64.o 00:16:27.298 CC lib/util/dif.o 00:16:27.298 LIB libspdk_dma.a 00:16:27.298 CC lib/util/fd.o 00:16:27.298 CC lib/util/fd_group.o 00:16:27.298 CC lib/util/file.o 00:16:27.298 SO libspdk_dma.so.5.0 00:16:27.298 LIB libspdk_ioat.a 00:16:27.298 CC lib/vfio_user/host/vfio_user.o 00:16:27.298 CC lib/util/hexlify.o 00:16:27.298 SO libspdk_ioat.so.7.0 00:16:27.298 SYMLINK libspdk_dma.so 00:16:27.298 CC lib/util/iov.o 00:16:27.298 SYMLINK libspdk_ioat.so 00:16:27.298 CC lib/util/math.o 00:16:27.298 CC lib/util/net.o 00:16:27.298 CC lib/util/pipe.o 00:16:27.298 CC lib/util/strerror_tls.o 00:16:27.298 CC lib/util/string.o 00:16:27.298 CC lib/util/uuid.o 00:16:27.298 CC lib/util/xor.o 00:16:27.298 CC lib/util/zipf.o 00:16:27.298 CC lib/util/md5.o 00:16:27.298 LIB libspdk_vfio_user.a 00:16:27.298 SO libspdk_vfio_user.so.5.0 00:16:27.298 SYMLINK libspdk_vfio_user.so 00:16:27.298 LIB libspdk_util.a 00:16:27.298 SO libspdk_util.so.10.1 00:16:27.298 SYMLINK libspdk_util.so 00:16:27.298 LIB libspdk_trace_parser.a 00:16:27.298 SO libspdk_trace_parser.so.6.0 00:16:27.298 SYMLINK libspdk_trace_parser.so 00:16:27.298 CC lib/json/json_parse.o 00:16:27.298 CC lib/json/json_util.o 00:16:27.298 CC lib/conf/conf.o 00:16:27.298 CC lib/json/json_write.o 00:16:27.298 CC lib/env_dpdk/env.o 00:16:27.298 CC lib/env_dpdk/memory.o 00:16:27.298 CC lib/env_dpdk/pci.o 00:16:27.298 CC lib/rdma_utils/rdma_utils.o 00:16:27.298 CC lib/idxd/idxd.o 00:16:27.298 CC lib/vmd/vmd.o 00:16:27.298 LIB libspdk_conf.a 
00:16:27.298 SO libspdk_conf.so.6.0 00:16:27.298 CC lib/vmd/led.o 00:16:27.298 LIB libspdk_rdma_utils.a 00:16:27.298 CC lib/idxd/idxd_user.o 00:16:27.298 SYMLINK libspdk_conf.so 00:16:27.298 SO libspdk_rdma_utils.so.1.0 00:16:27.298 CC lib/env_dpdk/init.o 00:16:27.556 SYMLINK libspdk_rdma_utils.so 00:16:27.556 CC lib/env_dpdk/threads.o 00:16:27.556 LIB libspdk_json.a 00:16:27.556 SO libspdk_json.so.6.0 00:16:27.556 CC lib/env_dpdk/pci_ioat.o 00:16:27.556 SYMLINK libspdk_json.so 00:16:27.556 CC lib/env_dpdk/pci_virtio.o 00:16:27.556 CC lib/env_dpdk/pci_vmd.o 00:16:27.813 CC lib/env_dpdk/pci_idxd.o 00:16:27.813 CC lib/env_dpdk/pci_event.o 00:16:27.813 CC lib/env_dpdk/sigbus_handler.o 00:16:27.813 CC lib/env_dpdk/pci_dpdk.o 00:16:27.813 CC lib/idxd/idxd_kernel.o 00:16:27.813 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:27.813 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:28.071 LIB libspdk_vmd.a 00:16:28.071 SO libspdk_vmd.so.6.0 00:16:28.071 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:28.071 CC lib/jsonrpc/jsonrpc_server.o 00:16:28.071 CC lib/rdma_provider/common.o 00:16:28.071 CC lib/jsonrpc/jsonrpc_client.o 00:16:28.071 LIB libspdk_idxd.a 00:16:28.071 SYMLINK libspdk_vmd.so 00:16:28.071 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:28.071 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:28.071 SO libspdk_idxd.so.12.1 00:16:28.071 SYMLINK libspdk_idxd.so 00:16:28.329 LIB libspdk_rdma_provider.a 00:16:28.329 SO libspdk_rdma_provider.so.7.0 00:16:28.329 LIB libspdk_jsonrpc.a 00:16:28.588 SYMLINK libspdk_rdma_provider.so 00:16:28.588 SO libspdk_jsonrpc.so.6.0 00:16:28.588 SYMLINK libspdk_jsonrpc.so 00:16:28.846 CC lib/rpc/rpc.o 00:16:28.846 LIB libspdk_env_dpdk.a 00:16:29.103 SO libspdk_env_dpdk.so.15.1 00:16:29.103 LIB libspdk_rpc.a 00:16:29.103 SO libspdk_rpc.so.6.0 00:16:29.361 SYMLINK libspdk_rpc.so 00:16:29.361 SYMLINK libspdk_env_dpdk.so 00:16:29.361 CC lib/trace/trace.o 00:16:29.361 CC lib/trace/trace_flags.o 00:16:29.361 CC lib/trace/trace_rpc.o 00:16:29.361 CC 
lib/keyring/keyring_rpc.o 00:16:29.361 CC lib/notify/notify.o 00:16:29.361 CC lib/keyring/keyring.o 00:16:29.361 CC lib/notify/notify_rpc.o 00:16:29.618 LIB libspdk_notify.a 00:16:29.618 SO libspdk_notify.so.6.0 00:16:29.875 LIB libspdk_trace.a 00:16:29.875 SYMLINK libspdk_notify.so 00:16:29.875 SO libspdk_trace.so.11.0 00:16:29.875 LIB libspdk_keyring.a 00:16:29.875 SO libspdk_keyring.so.2.0 00:16:29.875 SYMLINK libspdk_trace.so 00:16:29.875 SYMLINK libspdk_keyring.so 00:16:30.132 CC lib/thread/thread.o 00:16:30.132 CC lib/sock/sock.o 00:16:30.132 CC lib/sock/sock_rpc.o 00:16:30.132 CC lib/thread/iobuf.o 00:16:31.068 LIB libspdk_sock.a 00:16:31.068 SO libspdk_sock.so.10.0 00:16:31.068 SYMLINK libspdk_sock.so 00:16:31.326 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:31.326 CC lib/nvme/nvme_ctrlr.o 00:16:31.326 CC lib/nvme/nvme_fabric.o 00:16:31.326 CC lib/nvme/nvme_ns_cmd.o 00:16:31.326 CC lib/nvme/nvme_ns.o 00:16:31.326 CC lib/nvme/nvme_pcie.o 00:16:31.326 CC lib/nvme/nvme_pcie_common.o 00:16:31.326 CC lib/nvme/nvme_qpair.o 00:16:31.326 CC lib/nvme/nvme.o 00:16:32.259 CC lib/nvme/nvme_quirks.o 00:16:32.259 CC lib/nvme/nvme_transport.o 00:16:32.259 LIB libspdk_thread.a 00:16:32.259 SO libspdk_thread.so.11.0 00:16:32.517 SYMLINK libspdk_thread.so 00:16:32.517 CC lib/nvme/nvme_discovery.o 00:16:32.517 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:32.517 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:32.775 CC lib/nvme/nvme_tcp.o 00:16:32.775 CC lib/nvme/nvme_opal.o 00:16:32.775 CC lib/nvme/nvme_io_msg.o 00:16:32.775 CC lib/nvme/nvme_poll_group.o 00:16:32.775 CC lib/nvme/nvme_zns.o 00:16:33.032 CC lib/nvme/nvme_stubs.o 00:16:33.290 CC lib/nvme/nvme_auth.o 00:16:33.290 CC lib/nvme/nvme_cuse.o 00:16:33.290 CC lib/nvme/nvme_rdma.o 00:16:33.606 CC lib/accel/accel.o 00:16:33.606 CC lib/blob/blobstore.o 00:16:33.606 CC lib/init/json_config.o 00:16:33.864 CC lib/virtio/virtio.o 00:16:33.864 CC lib/fsdev/fsdev.o 00:16:33.864 CC lib/init/subsystem.o 00:16:34.122 CC lib/init/subsystem_rpc.o 
00:16:34.122 CC lib/virtio/virtio_vhost_user.o 00:16:34.380 CC lib/accel/accel_rpc.o 00:16:34.380 CC lib/accel/accel_sw.o 00:16:34.380 CC lib/virtio/virtio_vfio_user.o 00:16:34.380 CC lib/init/rpc.o 00:16:34.380 CC lib/fsdev/fsdev_io.o 00:16:34.638 LIB libspdk_init.a 00:16:34.638 CC lib/virtio/virtio_pci.o 00:16:34.638 CC lib/blob/request.o 00:16:34.638 SO libspdk_init.so.6.0 00:16:34.638 CC lib/blob/zeroes.o 00:16:34.638 CC lib/blob/blob_bs_dev.o 00:16:34.638 SYMLINK libspdk_init.so 00:16:34.638 CC lib/fsdev/fsdev_rpc.o 00:16:34.896 CC lib/event/app.o 00:16:34.896 CC lib/event/reactor.o 00:16:34.896 CC lib/event/log_rpc.o 00:16:34.896 LIB libspdk_virtio.a 00:16:34.896 LIB libspdk_fsdev.a 00:16:34.896 CC lib/event/app_rpc.o 00:16:34.896 SO libspdk_virtio.so.7.0 00:16:34.896 SO libspdk_fsdev.so.2.0 00:16:34.896 CC lib/event/scheduler_static.o 00:16:35.153 LIB libspdk_nvme.a 00:16:35.153 SYMLINK libspdk_virtio.so 00:16:35.153 SYMLINK libspdk_fsdev.so 00:16:35.153 LIB libspdk_accel.a 00:16:35.153 SO libspdk_accel.so.16.0 00:16:35.153 SYMLINK libspdk_accel.so 00:16:35.153 SO libspdk_nvme.so.15.0 00:16:35.153 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:35.410 CC lib/bdev/bdev_rpc.o 00:16:35.410 CC lib/bdev/bdev.o 00:16:35.410 CC lib/bdev/bdev_zone.o 00:16:35.410 CC lib/bdev/part.o 00:16:35.410 CC lib/bdev/scsi_nvme.o 00:16:35.410 LIB libspdk_event.a 00:16:35.687 SYMLINK libspdk_nvme.so 00:16:35.687 SO libspdk_event.so.14.0 00:16:35.687 SYMLINK libspdk_event.so 00:16:35.981 LIB libspdk_fuse_dispatcher.a 00:16:36.239 SO libspdk_fuse_dispatcher.so.1.0 00:16:36.239 SYMLINK libspdk_fuse_dispatcher.so 00:16:38.140 LIB libspdk_blob.a 00:16:38.140 SO libspdk_blob.so.11.0 00:16:38.398 SYMLINK libspdk_blob.so 00:16:38.656 CC lib/blobfs/blobfs.o 00:16:38.656 CC lib/blobfs/tree.o 00:16:38.656 CC lib/lvol/lvol.o 00:16:39.224 LIB libspdk_bdev.a 00:16:39.224 SO libspdk_bdev.so.17.0 00:16:39.224 SYMLINK libspdk_bdev.so 00:16:39.483 CC lib/nbd/nbd_rpc.o 00:16:39.483 CC lib/nbd/nbd.o 
00:16:39.483 CC lib/ftl/ftl_core.o 00:16:39.483 CC lib/ftl/ftl_init.o 00:16:39.483 CC lib/ublk/ublk.o 00:16:39.483 CC lib/ublk/ublk_rpc.o 00:16:39.483 CC lib/nvmf/ctrlr.o 00:16:39.483 CC lib/scsi/dev.o 00:16:39.741 LIB libspdk_blobfs.a 00:16:39.741 SO libspdk_blobfs.so.10.0 00:16:39.741 CC lib/scsi/lun.o 00:16:39.741 CC lib/ftl/ftl_layout.o 00:16:39.741 SYMLINK libspdk_blobfs.so 00:16:39.741 CC lib/ftl/ftl_debug.o 00:16:39.741 LIB libspdk_lvol.a 00:16:39.999 CC lib/scsi/port.o 00:16:39.999 CC lib/scsi/scsi.o 00:16:39.999 SO libspdk_lvol.so.10.0 00:16:39.999 SYMLINK libspdk_lvol.so 00:16:39.999 CC lib/ftl/ftl_io.o 00:16:39.999 CC lib/ftl/ftl_sb.o 00:16:39.999 CC lib/scsi/scsi_bdev.o 00:16:39.999 CC lib/ftl/ftl_l2p.o 00:16:40.258 CC lib/scsi/scsi_pr.o 00:16:40.258 CC lib/ftl/ftl_l2p_flat.o 00:16:40.258 LIB libspdk_nbd.a 00:16:40.258 SO libspdk_nbd.so.7.0 00:16:40.258 SYMLINK libspdk_nbd.so 00:16:40.258 CC lib/nvmf/ctrlr_discovery.o 00:16:40.258 CC lib/nvmf/ctrlr_bdev.o 00:16:40.258 CC lib/nvmf/subsystem.o 00:16:40.258 CC lib/nvmf/nvmf.o 00:16:40.258 CC lib/nvmf/nvmf_rpc.o 00:16:40.517 CC lib/ftl/ftl_nv_cache.o 00:16:40.517 LIB libspdk_ublk.a 00:16:40.517 SO libspdk_ublk.so.3.0 00:16:40.517 CC lib/scsi/scsi_rpc.o 00:16:40.517 SYMLINK libspdk_ublk.so 00:16:40.517 CC lib/scsi/task.o 00:16:40.776 CC lib/nvmf/transport.o 00:16:40.776 CC lib/nvmf/tcp.o 00:16:40.776 LIB libspdk_scsi.a 00:16:40.776 SO libspdk_scsi.so.9.0 00:16:41.035 CC lib/nvmf/stubs.o 00:16:41.035 SYMLINK libspdk_scsi.so 00:16:41.035 CC lib/ftl/ftl_band.o 00:16:41.294 CC lib/nvmf/mdns_server.o 00:16:41.552 CC lib/nvmf/rdma.o 00:16:41.552 CC lib/nvmf/auth.o 00:16:41.552 CC lib/ftl/ftl_band_ops.o 00:16:41.552 CC lib/iscsi/conn.o 00:16:41.811 CC lib/vhost/vhost.o 00:16:41.811 CC lib/vhost/vhost_rpc.o 00:16:41.811 CC lib/vhost/vhost_scsi.o 00:16:41.811 CC lib/iscsi/init_grp.o 00:16:42.069 CC lib/iscsi/iscsi.o 00:16:42.069 CC lib/ftl/ftl_writer.o 00:16:42.328 CC lib/ftl/ftl_rq.o 00:16:42.328 CC 
lib/ftl/ftl_reloc.o 00:16:42.328 CC lib/iscsi/param.o 00:16:42.328 CC lib/iscsi/portal_grp.o 00:16:42.587 CC lib/vhost/vhost_blk.o 00:16:42.587 CC lib/vhost/rte_vhost_user.o 00:16:42.845 CC lib/ftl/ftl_l2p_cache.o 00:16:42.845 CC lib/ftl/ftl_p2l.o 00:16:42.845 CC lib/iscsi/tgt_node.o 00:16:42.845 CC lib/iscsi/iscsi_subsystem.o 00:16:42.845 CC lib/iscsi/iscsi_rpc.o 00:16:43.104 CC lib/ftl/ftl_p2l_log.o 00:16:43.362 CC lib/ftl/mngt/ftl_mngt.o 00:16:43.362 CC lib/iscsi/task.o 00:16:43.362 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:43.362 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:43.362 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:43.362 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:43.620 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:43.878 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:43.878 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:43.878 LIB libspdk_vhost.a 00:16:43.878 CC lib/ftl/utils/ftl_conf.o 00:16:43.878 CC lib/ftl/utils/ftl_md.o 00:16:43.878 CC lib/ftl/utils/ftl_mempool.o 00:16:43.878 CC lib/ftl/utils/ftl_bitmap.o 00:16:43.878 SO libspdk_vhost.so.8.0 00:16:43.878 LIB libspdk_iscsi.a 00:16:44.135 SO libspdk_iscsi.so.8.0 00:16:44.135 CC lib/ftl/utils/ftl_property.o 00:16:44.135 SYMLINK libspdk_vhost.so 00:16:44.135 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:44.135 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:44.135 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:44.135 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:44.135 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:44.135 SYMLINK libspdk_iscsi.so 00:16:44.135 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:44.394 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:44.394 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:44.394 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:44.394 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:44.394 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:16:44.394 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:44.394 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:44.394 CC lib/ftl/base/ftl_base_dev.o 00:16:44.394 CC lib/ftl/base/ftl_base_bdev.o 00:16:44.394 LIB libspdk_nvmf.a 00:16:44.652 CC lib/ftl/ftl_trace.o 00:16:44.652 SO libspdk_nvmf.so.20.0 00:16:44.911 LIB libspdk_ftl.a 00:16:44.911 SYMLINK libspdk_nvmf.so 00:16:45.170 SO libspdk_ftl.so.9.0 00:16:45.489 SYMLINK libspdk_ftl.so 00:16:46.057 CC module/env_dpdk/env_dpdk_rpc.o 00:16:46.057 CC module/fsdev/aio/fsdev_aio.o 00:16:46.057 CC module/keyring/linux/keyring.o 00:16:46.057 CC module/blob/bdev/blob_bdev.o 00:16:46.057 CC module/accel/error/accel_error.o 00:16:46.057 CC module/keyring/file/keyring.o 00:16:46.057 CC module/sock/posix/posix.o 00:16:46.057 CC module/accel/dsa/accel_dsa.o 00:16:46.057 CC module/accel/ioat/accel_ioat.o 00:16:46.057 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:46.057 LIB libspdk_env_dpdk_rpc.a 00:16:46.057 SO libspdk_env_dpdk_rpc.so.6.0 00:16:46.316 CC module/keyring/file/keyring_rpc.o 00:16:46.316 SYMLINK libspdk_env_dpdk_rpc.so 00:16:46.316 CC module/accel/dsa/accel_dsa_rpc.o 00:16:46.316 CC module/keyring/linux/keyring_rpc.o 00:16:46.316 CC module/accel/ioat/accel_ioat_rpc.o 00:16:46.316 CC module/accel/error/accel_error_rpc.o 00:16:46.316 LIB libspdk_scheduler_dynamic.a 00:16:46.316 SO libspdk_scheduler_dynamic.so.4.0 00:16:46.316 LIB libspdk_keyring_file.a 00:16:46.316 LIB libspdk_keyring_linux.a 00:16:46.316 LIB libspdk_accel_dsa.a 00:16:46.316 SO libspdk_keyring_file.so.2.0 00:16:46.316 LIB libspdk_blob_bdev.a 00:16:46.316 SO libspdk_keyring_linux.so.1.0 00:16:46.316 SYMLINK libspdk_scheduler_dynamic.so 00:16:46.316 LIB libspdk_accel_ioat.a 00:16:46.316 SO libspdk_accel_dsa.so.5.0 00:16:46.316 LIB libspdk_accel_error.a 00:16:46.316 SO libspdk_blob_bdev.so.11.0 00:16:46.575 SO libspdk_accel_ioat.so.6.0 00:16:46.575 SYMLINK libspdk_keyring_file.so 00:16:46.575 SYMLINK libspdk_accel_dsa.so 00:16:46.575 SYMLINK 
libspdk_keyring_linux.so 00:16:46.575 SO libspdk_accel_error.so.2.0 00:16:46.575 SYMLINK libspdk_blob_bdev.so 00:16:46.575 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:46.575 CC module/fsdev/aio/linux_aio_mgr.o 00:16:46.575 SYMLINK libspdk_accel_ioat.so 00:16:46.575 SYMLINK libspdk_accel_error.so 00:16:46.575 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:46.575 CC module/scheduler/gscheduler/gscheduler.o 00:16:46.575 CC module/accel/iaa/accel_iaa.o 00:16:46.833 LIB libspdk_scheduler_dpdk_governor.a 00:16:46.833 LIB libspdk_scheduler_gscheduler.a 00:16:46.833 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:46.833 SO libspdk_scheduler_gscheduler.so.4.0 00:16:46.833 CC module/bdev/delay/vbdev_delay.o 00:16:46.833 CC module/blobfs/bdev/blobfs_bdev.o 00:16:46.833 CC module/bdev/error/vbdev_error.o 00:16:46.833 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:46.833 CC module/bdev/gpt/gpt.o 00:16:46.833 SYMLINK libspdk_scheduler_gscheduler.so 00:16:46.833 CC module/bdev/gpt/vbdev_gpt.o 00:16:46.833 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:46.833 CC module/bdev/lvol/vbdev_lvol.o 00:16:46.833 LIB libspdk_fsdev_aio.a 00:16:47.092 SO libspdk_fsdev_aio.so.1.0 00:16:47.092 LIB libspdk_sock_posix.a 00:16:47.092 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:47.092 SO libspdk_sock_posix.so.6.0 00:16:47.092 CC module/accel/iaa/accel_iaa_rpc.o 00:16:47.092 SYMLINK libspdk_fsdev_aio.so 00:16:47.092 CC module/bdev/error/vbdev_error_rpc.o 00:16:47.092 SYMLINK libspdk_sock_posix.so 00:16:47.092 LIB libspdk_accel_iaa.a 00:16:47.351 LIB libspdk_blobfs_bdev.a 00:16:47.351 LIB libspdk_bdev_gpt.a 00:16:47.351 SO libspdk_accel_iaa.so.3.0 00:16:47.351 SO libspdk_blobfs_bdev.so.6.0 00:16:47.351 SO libspdk_bdev_gpt.so.6.0 00:16:47.351 CC module/bdev/null/bdev_null.o 00:16:47.351 LIB libspdk_bdev_delay.a 00:16:47.351 CC module/bdev/malloc/bdev_malloc.o 00:16:47.351 LIB libspdk_bdev_error.a 00:16:47.351 SYMLINK libspdk_accel_iaa.so 00:16:47.351 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:16:47.351 CC module/bdev/nvme/bdev_nvme.o 00:16:47.351 SO libspdk_bdev_delay.so.6.0 00:16:47.351 SO libspdk_bdev_error.so.6.0 00:16:47.351 CC module/bdev/passthru/vbdev_passthru.o 00:16:47.351 SYMLINK libspdk_blobfs_bdev.so 00:16:47.351 SYMLINK libspdk_bdev_gpt.so 00:16:47.351 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:47.351 CC module/bdev/nvme/nvme_rpc.o 00:16:47.351 SYMLINK libspdk_bdev_error.so 00:16:47.351 SYMLINK libspdk_bdev_delay.so 00:16:47.351 CC module/bdev/nvme/bdev_mdns_client.o 00:16:47.351 CC module/bdev/nvme/vbdev_opal.o 00:16:47.610 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:47.610 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:47.610 CC module/bdev/null/bdev_null_rpc.o 00:16:47.610 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:47.610 CC module/bdev/raid/bdev_raid.o 00:16:47.610 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:47.610 CC module/bdev/raid/bdev_raid_rpc.o 00:16:47.868 LIB libspdk_bdev_malloc.a 00:16:47.868 LIB libspdk_bdev_null.a 00:16:47.868 SO libspdk_bdev_malloc.so.6.0 00:16:47.868 SO libspdk_bdev_null.so.6.0 00:16:47.868 CC module/bdev/raid/bdev_raid_sb.o 00:16:47.868 CC module/bdev/raid/raid0.o 00:16:47.869 LIB libspdk_bdev_passthru.a 00:16:47.869 SYMLINK libspdk_bdev_malloc.so 00:16:47.869 SO libspdk_bdev_passthru.so.6.0 00:16:47.869 SYMLINK libspdk_bdev_null.so 00:16:47.869 SYMLINK libspdk_bdev_passthru.so 00:16:47.869 CC module/bdev/raid/raid1.o 00:16:48.127 LIB libspdk_bdev_lvol.a 00:16:48.127 CC module/bdev/split/vbdev_split.o 00:16:48.127 SO libspdk_bdev_lvol.so.6.0 00:16:48.127 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:48.127 SYMLINK libspdk_bdev_lvol.so 00:16:48.127 CC module/bdev/raid/concat.o 00:16:48.127 CC module/bdev/aio/bdev_aio.o 00:16:48.127 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:48.127 CC module/bdev/aio/bdev_aio_rpc.o 00:16:48.386 CC module/bdev/raid/raid5f.o 00:16:48.386 CC module/bdev/split/vbdev_split_rpc.o 00:16:48.386 CC module/bdev/ftl/bdev_ftl.o 00:16:48.386 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:16:48.644 LIB libspdk_bdev_aio.a 00:16:48.644 CC module/bdev/iscsi/bdev_iscsi.o 00:16:48.644 LIB libspdk_bdev_split.a 00:16:48.644 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:48.644 LIB libspdk_bdev_zone_block.a 00:16:48.644 SO libspdk_bdev_split.so.6.0 00:16:48.644 SO libspdk_bdev_aio.so.6.0 00:16:48.644 SO libspdk_bdev_zone_block.so.6.0 00:16:48.644 SYMLINK libspdk_bdev_aio.so 00:16:48.644 SYMLINK libspdk_bdev_zone_block.so 00:16:48.644 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:48.644 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:48.644 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:48.644 SYMLINK libspdk_bdev_split.so 00:16:48.903 LIB libspdk_bdev_ftl.a 00:16:48.903 SO libspdk_bdev_ftl.so.6.0 00:16:48.903 SYMLINK libspdk_bdev_ftl.so 00:16:48.903 LIB libspdk_bdev_raid.a 00:16:49.162 LIB libspdk_bdev_iscsi.a 00:16:49.162 SO libspdk_bdev_raid.so.6.0 00:16:49.162 SO libspdk_bdev_iscsi.so.6.0 00:16:49.162 SYMLINK libspdk_bdev_iscsi.so 00:16:49.162 SYMLINK libspdk_bdev_raid.so 00:16:49.162 LIB libspdk_bdev_virtio.a 00:16:49.421 SO libspdk_bdev_virtio.so.6.0 00:16:49.421 SYMLINK libspdk_bdev_virtio.so 00:16:50.799 LIB libspdk_bdev_nvme.a 00:16:51.058 SO libspdk_bdev_nvme.so.7.1 00:16:51.058 SYMLINK libspdk_bdev_nvme.so 00:16:51.625 CC module/event/subsystems/vmd/vmd.o 00:16:51.625 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:51.625 CC module/event/subsystems/iobuf/iobuf.o 00:16:51.625 CC module/event/subsystems/keyring/keyring.o 00:16:51.625 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:51.625 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:51.625 CC module/event/subsystems/scheduler/scheduler.o 00:16:51.625 CC module/event/subsystems/sock/sock.o 00:16:51.625 CC module/event/subsystems/fsdev/fsdev.o 00:16:51.884 LIB libspdk_event_keyring.a 00:16:51.885 LIB libspdk_event_fsdev.a 00:16:51.885 LIB libspdk_event_vhost_blk.a 00:16:51.885 LIB libspdk_event_scheduler.a 00:16:51.885 LIB libspdk_event_vmd.a 00:16:51.885 LIB 
libspdk_event_sock.a 00:16:51.885 SO libspdk_event_keyring.so.1.0 00:16:51.885 LIB libspdk_event_iobuf.a 00:16:51.885 SO libspdk_event_scheduler.so.4.0 00:16:51.885 SO libspdk_event_fsdev.so.1.0 00:16:51.885 SO libspdk_event_vhost_blk.so.3.0 00:16:51.885 SO libspdk_event_vmd.so.6.0 00:16:51.885 SO libspdk_event_sock.so.5.0 00:16:51.885 SO libspdk_event_iobuf.so.3.0 00:16:51.885 SYMLINK libspdk_event_keyring.so 00:16:51.885 SYMLINK libspdk_event_fsdev.so 00:16:51.885 SYMLINK libspdk_event_sock.so 00:16:51.885 SYMLINK libspdk_event_vhost_blk.so 00:16:51.885 SYMLINK libspdk_event_scheduler.so 00:16:51.885 SYMLINK libspdk_event_vmd.so 00:16:51.885 SYMLINK libspdk_event_iobuf.so 00:16:52.143 CC module/event/subsystems/accel/accel.o 00:16:52.401 LIB libspdk_event_accel.a 00:16:52.401 SO libspdk_event_accel.so.6.0 00:16:52.401 SYMLINK libspdk_event_accel.so 00:16:52.659 CC module/event/subsystems/bdev/bdev.o 00:16:52.918 LIB libspdk_event_bdev.a 00:16:52.918 SO libspdk_event_bdev.so.6.0 00:16:53.177 SYMLINK libspdk_event_bdev.so 00:16:53.177 CC module/event/subsystems/nbd/nbd.o 00:16:53.177 CC module/event/subsystems/scsi/scsi.o 00:16:53.177 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:53.177 CC module/event/subsystems/ublk/ublk.o 00:16:53.177 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:53.435 LIB libspdk_event_nbd.a 00:16:53.435 SO libspdk_event_nbd.so.6.0 00:16:53.435 LIB libspdk_event_ublk.a 00:16:53.435 LIB libspdk_event_scsi.a 00:16:53.435 SO libspdk_event_ublk.so.3.0 00:16:53.435 SO libspdk_event_scsi.so.6.0 00:16:53.435 SYMLINK libspdk_event_nbd.so 00:16:53.693 SYMLINK libspdk_event_scsi.so 00:16:53.693 SYMLINK libspdk_event_ublk.so 00:16:53.693 LIB libspdk_event_nvmf.a 00:16:53.693 SO libspdk_event_nvmf.so.6.0 00:16:53.693 SYMLINK libspdk_event_nvmf.so 00:16:53.952 CC module/event/subsystems/iscsi/iscsi.o 00:16:53.952 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:53.952 LIB libspdk_event_iscsi.a 00:16:53.952 LIB libspdk_event_vhost_scsi.a 
00:16:53.952 SO libspdk_event_iscsi.so.6.0 00:16:53.952 SO libspdk_event_vhost_scsi.so.3.0 00:16:54.212 SYMLINK libspdk_event_vhost_scsi.so 00:16:54.212 SYMLINK libspdk_event_iscsi.so 00:16:54.212 SO libspdk.so.6.0 00:16:54.212 SYMLINK libspdk.so 00:16:54.470 CC app/spdk_nvme_identify/identify.o 00:16:54.470 CXX app/trace/trace.o 00:16:54.470 CC app/trace_record/trace_record.o 00:16:54.470 CC app/spdk_lspci/spdk_lspci.o 00:16:54.470 CC app/spdk_nvme_perf/perf.o 00:16:54.728 CC app/nvmf_tgt/nvmf_main.o 00:16:54.728 CC app/iscsi_tgt/iscsi_tgt.o 00:16:54.728 CC app/spdk_tgt/spdk_tgt.o 00:16:54.728 CC examples/util/zipf/zipf.o 00:16:54.728 CC test/thread/poller_perf/poller_perf.o 00:16:54.728 LINK spdk_lspci 00:16:54.987 LINK nvmf_tgt 00:16:54.987 LINK zipf 00:16:54.987 LINK poller_perf 00:16:54.987 LINK iscsi_tgt 00:16:54.987 LINK spdk_tgt 00:16:54.987 LINK spdk_trace_record 00:16:54.987 CC app/spdk_nvme_discover/discovery_aer.o 00:16:54.987 LINK spdk_trace 00:16:55.245 CC examples/ioat/perf/perf.o 00:16:55.245 CC app/spdk_top/spdk_top.o 00:16:55.245 CC app/spdk_dd/spdk_dd.o 00:16:55.245 CC test/dma/test_dma/test_dma.o 00:16:55.245 LINK spdk_nvme_discover 00:16:55.245 CC test/app/bdev_svc/bdev_svc.o 00:16:55.502 CC examples/vmd/lsvmd/lsvmd.o 00:16:55.502 CC examples/idxd/perf/perf.o 00:16:55.502 LINK ioat_perf 00:16:55.502 LINK lsvmd 00:16:55.502 LINK bdev_svc 00:16:55.502 TEST_HEADER include/spdk/accel.h 00:16:55.502 TEST_HEADER include/spdk/accel_module.h 00:16:55.502 TEST_HEADER include/spdk/assert.h 00:16:55.502 TEST_HEADER include/spdk/barrier.h 00:16:55.502 TEST_HEADER include/spdk/base64.h 00:16:55.760 TEST_HEADER include/spdk/bdev.h 00:16:55.760 TEST_HEADER include/spdk/bdev_module.h 00:16:55.760 TEST_HEADER include/spdk/bdev_zone.h 00:16:55.760 TEST_HEADER include/spdk/bit_array.h 00:16:55.760 TEST_HEADER include/spdk/bit_pool.h 00:16:55.760 TEST_HEADER include/spdk/blob_bdev.h 00:16:55.760 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:55.760 TEST_HEADER 
include/spdk/blobfs.h 00:16:55.760 TEST_HEADER include/spdk/blob.h 00:16:55.760 TEST_HEADER include/spdk/conf.h 00:16:55.760 TEST_HEADER include/spdk/config.h 00:16:55.760 TEST_HEADER include/spdk/cpuset.h 00:16:55.760 TEST_HEADER include/spdk/crc16.h 00:16:55.760 TEST_HEADER include/spdk/crc32.h 00:16:55.760 TEST_HEADER include/spdk/crc64.h 00:16:55.760 TEST_HEADER include/spdk/dif.h 00:16:55.760 TEST_HEADER include/spdk/dma.h 00:16:55.760 TEST_HEADER include/spdk/endian.h 00:16:55.760 TEST_HEADER include/spdk/env_dpdk.h 00:16:55.760 TEST_HEADER include/spdk/env.h 00:16:55.760 TEST_HEADER include/spdk/event.h 00:16:55.760 TEST_HEADER include/spdk/fd_group.h 00:16:55.760 TEST_HEADER include/spdk/fd.h 00:16:55.760 TEST_HEADER include/spdk/file.h 00:16:55.760 TEST_HEADER include/spdk/fsdev.h 00:16:55.760 TEST_HEADER include/spdk/fsdev_module.h 00:16:55.760 TEST_HEADER include/spdk/ftl.h 00:16:55.760 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:55.760 TEST_HEADER include/spdk/gpt_spec.h 00:16:55.760 TEST_HEADER include/spdk/hexlify.h 00:16:55.760 TEST_HEADER include/spdk/histogram_data.h 00:16:55.760 TEST_HEADER include/spdk/idxd.h 00:16:55.760 TEST_HEADER include/spdk/idxd_spec.h 00:16:55.760 TEST_HEADER include/spdk/init.h 00:16:55.760 TEST_HEADER include/spdk/ioat.h 00:16:55.760 TEST_HEADER include/spdk/ioat_spec.h 00:16:55.760 TEST_HEADER include/spdk/iscsi_spec.h 00:16:55.760 TEST_HEADER include/spdk/json.h 00:16:55.760 TEST_HEADER include/spdk/jsonrpc.h 00:16:55.760 TEST_HEADER include/spdk/keyring.h 00:16:55.760 TEST_HEADER include/spdk/keyring_module.h 00:16:55.760 TEST_HEADER include/spdk/likely.h 00:16:55.760 TEST_HEADER include/spdk/log.h 00:16:55.760 LINK spdk_nvme_identify 00:16:55.760 TEST_HEADER include/spdk/lvol.h 00:16:55.760 TEST_HEADER include/spdk/md5.h 00:16:55.760 TEST_HEADER include/spdk/memory.h 00:16:55.760 CC examples/ioat/verify/verify.o 00:16:55.760 TEST_HEADER include/spdk/mmio.h 00:16:55.760 TEST_HEADER include/spdk/nbd.h 00:16:55.760 
TEST_HEADER include/spdk/net.h 00:16:55.760 TEST_HEADER include/spdk/notify.h 00:16:55.760 LINK spdk_dd 00:16:55.760 TEST_HEADER include/spdk/nvme.h 00:16:55.760 TEST_HEADER include/spdk/nvme_intel.h 00:16:55.760 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:55.760 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:55.760 TEST_HEADER include/spdk/nvme_spec.h 00:16:55.760 TEST_HEADER include/spdk/nvme_zns.h 00:16:55.760 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:55.760 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:55.760 TEST_HEADER include/spdk/nvmf.h 00:16:55.760 TEST_HEADER include/spdk/nvmf_spec.h 00:16:55.760 TEST_HEADER include/spdk/nvmf_transport.h 00:16:55.760 TEST_HEADER include/spdk/opal.h 00:16:55.760 TEST_HEADER include/spdk/opal_spec.h 00:16:55.760 TEST_HEADER include/spdk/pci_ids.h 00:16:55.760 TEST_HEADER include/spdk/pipe.h 00:16:55.760 TEST_HEADER include/spdk/queue.h 00:16:55.760 TEST_HEADER include/spdk/reduce.h 00:16:55.760 TEST_HEADER include/spdk/rpc.h 00:16:55.760 TEST_HEADER include/spdk/scheduler.h 00:16:55.760 TEST_HEADER include/spdk/scsi.h 00:16:55.760 TEST_HEADER include/spdk/scsi_spec.h 00:16:55.760 TEST_HEADER include/spdk/sock.h 00:16:55.760 TEST_HEADER include/spdk/stdinc.h 00:16:55.760 LINK spdk_nvme_perf 00:16:55.760 TEST_HEADER include/spdk/string.h 00:16:55.760 TEST_HEADER include/spdk/thread.h 00:16:55.760 TEST_HEADER include/spdk/trace.h 00:16:55.760 CC examples/vmd/led/led.o 00:16:55.760 TEST_HEADER include/spdk/trace_parser.h 00:16:55.760 TEST_HEADER include/spdk/tree.h 00:16:55.760 TEST_HEADER include/spdk/ublk.h 00:16:55.760 TEST_HEADER include/spdk/util.h 00:16:55.760 TEST_HEADER include/spdk/uuid.h 00:16:55.761 TEST_HEADER include/spdk/version.h 00:16:55.761 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:55.761 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:55.761 TEST_HEADER include/spdk/vhost.h 00:16:55.761 TEST_HEADER include/spdk/vmd.h 00:16:55.761 TEST_HEADER include/spdk/xor.h 00:16:55.761 TEST_HEADER 
include/spdk/zipf.h 00:16:55.761 CXX test/cpp_headers/accel.o 00:16:56.018 LINK idxd_perf 00:16:56.018 LINK test_dma 00:16:56.018 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:56.018 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:56.019 LINK verify 00:16:56.019 LINK led 00:16:56.019 CXX test/cpp_headers/accel_module.o 00:16:56.019 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:56.277 CC app/fio/nvme/fio_plugin.o 00:16:56.277 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:56.277 CXX test/cpp_headers/assert.o 00:16:56.277 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:56.277 CXX test/cpp_headers/barrier.o 00:16:56.277 LINK spdk_top 00:16:56.277 CC examples/sock/hello_world/hello_sock.o 00:16:56.277 LINK interrupt_tgt 00:16:56.277 CC examples/thread/thread/thread_ex.o 00:16:56.277 CC app/vhost/vhost.o 00:16:56.535 LINK nvme_fuzz 00:16:56.535 CXX test/cpp_headers/base64.o 00:16:56.535 CXX test/cpp_headers/bdev.o 00:16:56.535 LINK vhost 00:16:56.793 LINK vhost_fuzz 00:16:56.793 LINK thread 00:16:56.793 LINK hello_sock 00:16:56.793 CXX test/cpp_headers/bdev_module.o 00:16:56.793 CC test/env/mem_callbacks/mem_callbacks.o 00:16:56.793 CC test/event/event_perf/event_perf.o 00:16:56.793 CC test/event/reactor/reactor.o 00:16:56.793 LINK spdk_nvme 00:16:57.052 CC test/app/histogram_perf/histogram_perf.o 00:16:57.052 CC test/app/jsoncat/jsoncat.o 00:16:57.052 CC app/fio/bdev/fio_plugin.o 00:16:57.052 LINK event_perf 00:16:57.052 LINK reactor 00:16:57.052 CXX test/cpp_headers/bdev_zone.o 00:16:57.052 CC examples/nvme/hello_world/hello_world.o 00:16:57.052 LINK jsoncat 00:16:57.052 LINK histogram_perf 00:16:57.313 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:57.313 CC test/event/reactor_perf/reactor_perf.o 00:16:57.313 CC test/app/stub/stub.o 00:16:57.313 CXX test/cpp_headers/bit_array.o 00:16:57.313 CC test/env/vtophys/vtophys.o 00:16:57.313 LINK hello_world 00:16:57.313 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:57.313 LINK reactor_perf 00:16:57.572 CXX 
test/cpp_headers/bit_pool.o 00:16:57.572 LINK stub 00:16:57.572 LINK mem_callbacks 00:16:57.572 LINK vtophys 00:16:57.572 LINK hello_fsdev 00:16:57.572 LINK env_dpdk_post_init 00:16:57.572 LINK spdk_bdev 00:16:57.572 CC examples/nvme/reconnect/reconnect.o 00:16:57.572 CXX test/cpp_headers/blob_bdev.o 00:16:57.572 CXX test/cpp_headers/blobfs_bdev.o 00:16:57.831 CC test/env/memory/memory_ut.o 00:16:57.831 CC test/event/app_repeat/app_repeat.o 00:16:57.831 CXX test/cpp_headers/blobfs.o 00:16:57.831 CC test/event/scheduler/scheduler.o 00:16:57.831 CC examples/accel/perf/accel_perf.o 00:16:57.831 CC test/env/pci/pci_ut.o 00:16:57.831 LINK app_repeat 00:16:57.831 CC examples/blob/hello_world/hello_blob.o 00:16:58.089 CC test/nvme/aer/aer.o 00:16:58.089 CXX test/cpp_headers/blob.o 00:16:58.089 LINK reconnect 00:16:58.089 CXX test/cpp_headers/conf.o 00:16:58.089 LINK scheduler 00:16:58.089 LINK iscsi_fuzz 00:16:58.348 LINK hello_blob 00:16:58.348 CXX test/cpp_headers/config.o 00:16:58.348 CXX test/cpp_headers/cpuset.o 00:16:58.348 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:58.348 CXX test/cpp_headers/crc16.o 00:16:58.348 CC examples/blob/cli/blobcli.o 00:16:58.348 LINK aer 00:16:58.348 LINK pci_ut 00:16:58.607 CC test/nvme/reset/reset.o 00:16:58.607 LINK accel_perf 00:16:58.607 CXX test/cpp_headers/crc32.o 00:16:58.607 CC test/nvme/sgl/sgl.o 00:16:58.607 CC test/nvme/e2edp/nvme_dp.o 00:16:58.607 CC test/nvme/overhead/overhead.o 00:16:58.865 CXX test/cpp_headers/crc64.o 00:16:58.865 CC test/nvme/err_injection/err_injection.o 00:16:58.865 CC test/nvme/startup/startup.o 00:16:58.865 LINK reset 00:16:58.865 CXX test/cpp_headers/dif.o 00:16:58.865 LINK sgl 00:16:58.865 LINK blobcli 00:16:58.865 LINK nvme_dp 00:16:59.124 LINK startup 00:16:59.124 LINK overhead 00:16:59.124 LINK err_injection 00:16:59.124 CXX test/cpp_headers/dma.o 00:16:59.124 LINK nvme_manage 00:16:59.124 LINK memory_ut 00:16:59.124 CC test/nvme/reserve/reserve.o 00:16:59.383 CC 
test/nvme/simple_copy/simple_copy.o 00:16:59.383 CC test/nvme/connect_stress/connect_stress.o 00:16:59.383 CC test/nvme/compliance/nvme_compliance.o 00:16:59.383 CC test/nvme/boot_partition/boot_partition.o 00:16:59.383 CXX test/cpp_headers/endian.o 00:16:59.383 CC test/nvme/fused_ordering/fused_ordering.o 00:16:59.383 CC examples/nvme/arbitration/arbitration.o 00:16:59.383 LINK reserve 00:16:59.383 CC examples/bdev/hello_world/hello_bdev.o 00:16:59.383 CC test/rpc_client/rpc_client_test.o 00:16:59.641 LINK boot_partition 00:16:59.641 LINK connect_stress 00:16:59.641 CXX test/cpp_headers/env_dpdk.o 00:16:59.641 LINK simple_copy 00:16:59.641 CXX test/cpp_headers/env.o 00:16:59.641 LINK fused_ordering 00:16:59.641 CXX test/cpp_headers/event.o 00:16:59.641 LINK rpc_client_test 00:16:59.641 LINK hello_bdev 00:16:59.641 LINK nvme_compliance 00:16:59.641 CXX test/cpp_headers/fd_group.o 00:16:59.899 CXX test/cpp_headers/fd.o 00:16:59.899 CC test/nvme/fdp/fdp.o 00:16:59.899 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:59.899 CXX test/cpp_headers/file.o 00:16:59.899 LINK arbitration 00:16:59.899 CXX test/cpp_headers/fsdev.o 00:16:59.899 CXX test/cpp_headers/fsdev_module.o 00:16:59.899 CXX test/cpp_headers/ftl.o 00:16:59.899 CXX test/cpp_headers/fuse_dispatcher.o 00:16:59.899 CXX test/cpp_headers/gpt_spec.o 00:16:59.899 CXX test/cpp_headers/hexlify.o 00:17:00.158 LINK doorbell_aers 00:17:00.158 CXX test/cpp_headers/histogram_data.o 00:17:00.158 CC examples/bdev/bdevperf/bdevperf.o 00:17:00.158 CC examples/nvme/hotplug/hotplug.o 00:17:00.158 CC test/nvme/cuse/cuse.o 00:17:00.158 CXX test/cpp_headers/idxd.o 00:17:00.158 CXX test/cpp_headers/idxd_spec.o 00:17:00.158 LINK fdp 00:17:00.158 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:00.416 CC examples/nvme/abort/abort.o 00:17:00.416 CC test/blobfs/mkfs/mkfs.o 00:17:00.416 CC test/accel/dif/dif.o 00:17:00.416 CXX test/cpp_headers/init.o 00:17:00.416 LINK hotplug 00:17:00.416 CXX test/cpp_headers/ioat.o 00:17:00.416 LINK 
cmb_copy 00:17:00.416 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:00.675 CXX test/cpp_headers/ioat_spec.o 00:17:00.675 LINK mkfs 00:17:00.675 CXX test/cpp_headers/iscsi_spec.o 00:17:00.675 CXX test/cpp_headers/json.o 00:17:00.675 LINK pmr_persistence 00:17:00.675 LINK abort 00:17:00.675 CXX test/cpp_headers/jsonrpc.o 00:17:00.675 CXX test/cpp_headers/keyring.o 00:17:00.675 CXX test/cpp_headers/keyring_module.o 00:17:00.933 CC test/lvol/esnap/esnap.o 00:17:00.933 CXX test/cpp_headers/likely.o 00:17:00.933 CXX test/cpp_headers/log.o 00:17:00.933 CXX test/cpp_headers/lvol.o 00:17:00.933 CXX test/cpp_headers/md5.o 00:17:00.933 CXX test/cpp_headers/memory.o 00:17:00.933 CXX test/cpp_headers/mmio.o 00:17:00.933 CXX test/cpp_headers/nbd.o 00:17:01.192 CXX test/cpp_headers/net.o 00:17:01.192 CXX test/cpp_headers/notify.o 00:17:01.192 LINK bdevperf 00:17:01.192 CXX test/cpp_headers/nvme.o 00:17:01.192 CXX test/cpp_headers/nvme_intel.o 00:17:01.192 CXX test/cpp_headers/nvme_ocssd.o 00:17:01.192 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:01.192 CXX test/cpp_headers/nvme_spec.o 00:17:01.192 CXX test/cpp_headers/nvme_zns.o 00:17:01.450 CXX test/cpp_headers/nvmf_cmd.o 00:17:01.450 LINK dif 00:17:01.450 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:01.450 CXX test/cpp_headers/nvmf.o 00:17:01.450 CXX test/cpp_headers/nvmf_spec.o 00:17:01.450 CXX test/cpp_headers/nvmf_transport.o 00:17:01.450 CXX test/cpp_headers/opal.o 00:17:01.450 CC examples/nvmf/nvmf/nvmf.o 00:17:01.450 CXX test/cpp_headers/opal_spec.o 00:17:01.708 CXX test/cpp_headers/pci_ids.o 00:17:01.708 CXX test/cpp_headers/pipe.o 00:17:01.708 CXX test/cpp_headers/queue.o 00:17:01.708 CXX test/cpp_headers/reduce.o 00:17:01.708 CXX test/cpp_headers/rpc.o 00:17:01.708 CXX test/cpp_headers/scheduler.o 00:17:01.708 CXX test/cpp_headers/scsi.o 00:17:01.708 LINK cuse 00:17:01.708 CXX test/cpp_headers/scsi_spec.o 00:17:01.708 CC test/bdev/bdevio/bdevio.o 00:17:01.708 CXX test/cpp_headers/sock.o 00:17:01.966 CXX 
test/cpp_headers/stdinc.o 00:17:01.966 CXX test/cpp_headers/string.o 00:17:01.966 CXX test/cpp_headers/thread.o 00:17:01.966 LINK nvmf 00:17:01.967 CXX test/cpp_headers/trace.o 00:17:01.967 CXX test/cpp_headers/trace_parser.o 00:17:01.967 CXX test/cpp_headers/tree.o 00:17:01.967 CXX test/cpp_headers/ublk.o 00:17:01.967 CXX test/cpp_headers/util.o 00:17:01.967 CXX test/cpp_headers/uuid.o 00:17:01.967 CXX test/cpp_headers/version.o 00:17:01.967 CXX test/cpp_headers/vfio_user_pci.o 00:17:02.225 CXX test/cpp_headers/vfio_user_spec.o 00:17:02.225 CXX test/cpp_headers/vhost.o 00:17:02.225 CXX test/cpp_headers/vmd.o 00:17:02.225 CXX test/cpp_headers/xor.o 00:17:02.225 CXX test/cpp_headers/zipf.o 00:17:02.225 LINK bdevio 00:17:08.847 LINK esnap 00:17:08.847 00:17:08.847 real 1m41.917s 00:17:08.847 user 9m14.717s 00:17:08.847 sys 1m48.845s 00:17:08.847 07:14:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:08.847 ************************************ 00:17:08.847 END TEST make 00:17:08.847 ************************************ 00:17:08.847 07:14:32 make -- common/autotest_common.sh@10 -- $ set +x 00:17:08.847 07:14:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:08.847 07:14:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:08.847 07:14:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:08.847 07:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.847 07:14:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:08.847 07:14:32 -- pm/common@44 -- $ pid=5416 00:17:08.847 07:14:32 -- pm/common@50 -- $ kill -TERM 5416 00:17:08.847 07:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.847 07:14:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:08.847 07:14:32 -- pm/common@44 -- $ pid=5417 00:17:08.847 07:14:32 -- pm/common@50 -- $ kill -TERM 5417 00:17:08.847 07:14:32 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:17:08.847 07:14:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:08.847 07:14:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:08.847 07:14:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:17:08.847 07:14:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:08.847 07:14:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:08.847 07:14:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.847 07:14:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.847 07:14:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.847 07:14:32 -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.847 07:14:32 -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.847 07:14:32 -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.847 07:14:32 -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.847 07:14:32 -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.847 07:14:32 -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.847 07:14:32 -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.847 07:14:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.847 07:14:32 -- scripts/common.sh@344 -- # case "$op" in 00:17:08.847 07:14:32 -- scripts/common.sh@345 -- # : 1 00:17:08.847 07:14:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.847 07:14:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.847 07:14:32 -- scripts/common.sh@365 -- # decimal 1 00:17:08.847 07:14:32 -- scripts/common.sh@353 -- # local d=1 00:17:08.847 07:14:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.847 07:14:32 -- scripts/common.sh@355 -- # echo 1 00:17:08.847 07:14:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.847 07:14:32 -- scripts/common.sh@366 -- # decimal 2 00:17:08.847 07:14:32 -- scripts/common.sh@353 -- # local d=2 00:17:08.847 07:14:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.847 07:14:32 -- scripts/common.sh@355 -- # echo 2 00:17:08.847 07:14:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.847 07:14:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.847 07:14:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.847 07:14:32 -- scripts/common.sh@368 -- # return 0 00:17:08.847 07:14:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.847 07:14:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:08.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.847 --rc genhtml_branch_coverage=1 00:17:08.847 --rc genhtml_function_coverage=1 00:17:08.847 --rc genhtml_legend=1 00:17:08.847 --rc geninfo_all_blocks=1 00:17:08.847 --rc geninfo_unexecuted_blocks=1 00:17:08.847 00:17:08.847 ' 00:17:08.847 07:14:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:08.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.847 --rc genhtml_branch_coverage=1 00:17:08.847 --rc genhtml_function_coverage=1 00:17:08.847 --rc genhtml_legend=1 00:17:08.847 --rc geninfo_all_blocks=1 00:17:08.847 --rc geninfo_unexecuted_blocks=1 00:17:08.847 00:17:08.847 ' 00:17:08.847 07:14:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:08.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.847 --rc genhtml_branch_coverage=1 00:17:08.847 --rc 
genhtml_function_coverage=1 00:17:08.847 --rc genhtml_legend=1 00:17:08.847 --rc geninfo_all_blocks=1 00:17:08.847 --rc geninfo_unexecuted_blocks=1 00:17:08.847 00:17:08.847 ' 00:17:08.847 07:14:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:08.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.847 --rc genhtml_branch_coverage=1 00:17:08.847 --rc genhtml_function_coverage=1 00:17:08.847 --rc genhtml_legend=1 00:17:08.847 --rc geninfo_all_blocks=1 00:17:08.847 --rc geninfo_unexecuted_blocks=1 00:17:08.847 00:17:08.847 ' 00:17:08.847 07:14:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.847 07:14:32 -- nvmf/common.sh@7 -- # uname -s 00:17:08.847 07:14:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.847 07:14:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.847 07:14:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.847 07:14:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.847 07:14:32 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.847 07:14:32 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:08.847 07:14:32 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.847 07:14:32 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:08.847 07:14:32 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad2883fb-24dd-40e1-a09a-594bd38040a9 00:17:08.847 07:14:32 -- nvmf/common.sh@16 -- # NVME_HOSTID=ad2883fb-24dd-40e1-a09a-594bd38040a9 00:17:08.847 07:14:32 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.847 07:14:32 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:08.847 07:14:32 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:17:08.847 07:14:32 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.847 07:14:32 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.847 07:14:32 -- 
scripts/common.sh@15 -- # shopt -s extglob 00:17:08.847 07:14:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.847 07:14:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.847 07:14:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.847 07:14:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.848 07:14:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.848 07:14:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.848 07:14:32 -- paths/export.sh@5 -- # export PATH 00:17:08.848 07:14:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.848 07:14:32 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:08.848 07:14:32 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:08.848 07:14:32 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:08.848 07:14:32 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:08.848 07:14:32 -- nvmf/common.sh@50 
-- # : 0 00:17:08.848 07:14:32 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:08.848 07:14:32 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:08.848 07:14:32 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:08.848 07:14:32 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.848 07:14:32 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.848 07:14:32 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:08.848 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:08.848 07:14:32 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:08.848 07:14:32 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:08.848 07:14:32 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:08.848 07:14:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:08.848 07:14:32 -- spdk/autotest.sh@32 -- # uname -s 00:17:08.848 07:14:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:08.848 07:14:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:08.848 07:14:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:08.848 07:14:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:17:08.848 07:14:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:08.848 07:14:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:08.848 07:14:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:08.848 07:14:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:08.848 07:14:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:17:08.848 07:14:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54524 00:17:08.848 07:14:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:08.848 07:14:32 -- pm/common@17 -- # local monitor 00:17:08.848 07:14:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.848 07:14:32 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:08.848 07:14:32 -- pm/common@21 -- # date +%s 00:17:08.848 07:14:32 -- pm/common@25 -- # sleep 1 00:17:08.848 07:14:32 -- pm/common@21 -- # date +%s 00:17:08.848 07:14:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086872 00:17:08.848 07:14:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086872 00:17:08.848 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086872_collect-vmstat.pm.log 00:17:08.848 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086872_collect-cpu-load.pm.log 00:17:09.785 07:14:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:09.785 07:14:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:09.785 07:14:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.785 07:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:09.785 07:14:33 -- spdk/autotest.sh@59 -- # create_test_list 00:17:09.785 07:14:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:17:09.785 07:14:33 -- common/autotest_common.sh@10 -- # set +x 00:17:09.785 07:14:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:17:09.785 07:14:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:17:09.785 07:14:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:17:09.785 07:14:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:17:09.785 07:14:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:17:09.785 07:14:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:09.785 07:14:33 -- common/autotest_common.sh@1457 -- # uname 00:17:09.785 07:14:33 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:17:09.785 07:14:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:09.785 07:14:33 -- common/autotest_common.sh@1477 -- # uname 00:17:09.785 07:14:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:17:09.785 07:14:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:17:09.785 07:14:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:17:10.044 lcov: LCOV version 1.15 00:17:10.044 07:14:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:17:28.160 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:17:28.160 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:17:46.241 07:15:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:17:46.241 07:15:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.241 07:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:46.241 07:15:09 -- spdk/autotest.sh@78 -- # rm -f 00:17:46.241 07:15:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:46.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:46.241 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:46.241 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:46.241 07:15:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:17:46.241 07:15:10 -- common/autotest_common.sh@1657 -- # 
zoned_devs=() 00:17:46.241 07:15:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:46.241 07:15:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:17:46.241 07:15:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:46.241 07:15:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:17:46.241 07:15:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:46.241 07:15:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:46.241 07:15:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:17:46.241 07:15:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:46.241 07:15:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:46.241 07:15:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:17:46.241 07:15:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:17:46.241 07:15:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:46.241 07:15:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:17:46.241 07:15:10 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:17:46.241 07:15:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:17:46.241 07:15:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:46.241 07:15:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:17:46.241 
07:15:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:46.241 07:15:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:46.241 07:15:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:17:46.241 07:15:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:46.241 07:15:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:46.241 No valid GPT data, bailing 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # pt= 00:17:46.241 07:15:10 -- scripts/common.sh@395 -- # return 1 00:17:46.241 07:15:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:46.241 1+0 records in 00:17:46.241 1+0 records out 00:17:46.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548454 s, 191 MB/s 00:17:46.241 07:15:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:46.241 07:15:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:46.241 07:15:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:17:46.241 07:15:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:17:46.241 07:15:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:17:46.241 No valid GPT data, bailing 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # pt= 00:17:46.241 07:15:10 -- scripts/common.sh@395 -- # return 1 00:17:46.241 07:15:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:17:46.241 1+0 records in 00:17:46.241 1+0 records out 00:17:46.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543055 s, 193 MB/s 00:17:46.241 07:15:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:46.241 07:15:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:46.241 07:15:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:17:46.241 
07:15:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:17:46.241 07:15:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:17:46.241 No valid GPT data, bailing 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # pt= 00:17:46.241 07:15:10 -- scripts/common.sh@395 -- # return 1 00:17:46.241 07:15:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:17:46.241 1+0 records in 00:17:46.241 1+0 records out 00:17:46.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512526 s, 205 MB/s 00:17:46.241 07:15:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:46.241 07:15:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:46.241 07:15:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:17:46.241 07:15:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:17:46.241 07:15:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:17:46.241 No valid GPT data, bailing 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:17:46.241 07:15:10 -- scripts/common.sh@394 -- # pt= 00:17:46.241 07:15:10 -- scripts/common.sh@395 -- # return 1 00:17:46.241 07:15:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:17:46.241 1+0 records in 00:17:46.241 1+0 records out 00:17:46.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049476 s, 212 MB/s 00:17:46.241 07:15:10 -- spdk/autotest.sh@105 -- # sync 00:17:46.500 07:15:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:17:46.500 07:15:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:46.500 07:15:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:48.429 07:15:12 -- spdk/autotest.sh@111 -- # uname -s 00:17:48.429 07:15:12 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:17:48.429 07:15:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:48.429 07:15:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:48.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:48.996 Hugepages 00:17:48.996 node hugesize free / total 00:17:48.996 node0 1048576kB 0 / 0 00:17:49.254 node0 2048kB 0 / 0 00:17:49.254 00:17:49.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:49.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:49.254 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:49.254 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:49.254 07:15:13 -- spdk/autotest.sh@117 -- # uname -s 00:17:49.254 07:15:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:49.254 07:15:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:49.254 07:15:13 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:50.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.189 07:15:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:17:51.183 07:15:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:17:51.183 07:15:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:17:51.183 07:15:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:17:51.183 07:15:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:17:51.183 07:15:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:51.183 07:15:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:51.183 07:15:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:51.183 07:15:15 -- common/autotest_common.sh@1499 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:51.183 07:15:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:51.441 07:15:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:51.441 07:15:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:51.441 07:15:15 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:51.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:51.699 Waiting for block devices as requested 00:17:51.699 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:51.957 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:51.958 07:15:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:51.958 07:15:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:51.958 07:15:16 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:51.958 07:15:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:51.958 07:15:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:51.958 07:15:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1543 -- # continue 00:17:51.958 07:15:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:51.958 07:15:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:17:51.958 07:15:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:51.958 07:15:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:51.958 07:15:16 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:51.958 07:15:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:51.958 07:15:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:51.958 07:15:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:51.958 07:15:16 -- common/autotest_common.sh@1543 -- # continue 00:17:51.958 07:15:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:51.958 07:15:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.958 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.958 07:15:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:51.958 07:15:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.958 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:17:51.958 07:15:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:52.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:52.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:52.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:52.893 07:15:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:52.893 07:15:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.893 07:15:17 -- common/autotest_common.sh@10 -- # set +x 00:17:52.893 07:15:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:52.893 07:15:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:17:52.893 07:15:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:17:52.893 07:15:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:17:52.893 07:15:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:17:52.893 07:15:17 -- 
common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:17:52.893 07:15:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:17:52.893 07:15:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:17:52.893 07:15:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:52.893 07:15:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:52.893 07:15:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:52.893 07:15:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:52.893 07:15:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:52.893 07:15:17 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:52.893 07:15:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:52.893 07:15:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:52.893 07:15:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:53.152 07:15:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:53.152 07:15:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:53.152 07:15:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:53.152 07:15:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:53.152 07:15:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:53.152 07:15:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:53.152 07:15:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:17:53.152 07:15:17 -- common/autotest_common.sh@1572 -- # return 0 00:17:53.152 07:15:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:17:53.152 07:15:17 -- common/autotest_common.sh@1580 -- # return 0 00:17:53.152 07:15:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:53.152 07:15:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:17:53.152 07:15:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:53.152 07:15:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:53.152 07:15:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:53.152 07:15:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.152 07:15:17 -- common/autotest_common.sh@10 -- # set +x 00:17:53.152 07:15:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:53.152 07:15:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:53.152 07:15:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.152 07:15:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.152 07:15:17 -- common/autotest_common.sh@10 -- # set +x 00:17:53.152 ************************************ 00:17:53.152 START TEST env 00:17:53.152 ************************************ 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:53.152 * Looking for test storage... 
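[Editor's note] The opal_revert_cleanup gate traced above (autotest_common.sh@1531-1541) keys off the OACS word reported by `nvme id-ctrl`: bit 3 (0x8) indicates Namespace Management support, and `unvmcap == 0` means there is no unallocated capacity left to revert. A minimal sketch of that mask, reusing the `' 0x12a'` value this run captured (variable names mirror the trace; the snippet is illustrative, not the script itself):

```shell
# Reproduce the OACS namespace-management check from the trace above.
# ' 0x12a' is the OACS value this run extracted from `nvme id-ctrl`.
oacs=' 0x12a'
oacs_ns_manage=$(( oacs & 0x8 ))   # bit 3: Namespace Management/Attachment
if [ "$oacs_ns_manage" -ne 0 ]; then
  echo "namespace management supported"
fi
# prints: namespace management supported
```

Bash arithmetic expansion parses the `0x` prefix as hex, so `0x12a & 0x8` yields 8, matching the `oacs_ns_manage=8` seen in the trace.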
00:17:53.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.152 07:15:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.152 07:15:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.152 07:15:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.152 07:15:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.152 07:15:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.152 07:15:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.152 07:15:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.152 07:15:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.152 07:15:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.152 07:15:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.152 07:15:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.152 07:15:17 env -- scripts/common.sh@344 -- # case "$op" in 00:17:53.152 07:15:17 env -- scripts/common.sh@345 -- # : 1 00:17:53.152 07:15:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.152 07:15:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.152 07:15:17 env -- scripts/common.sh@365 -- # decimal 1 00:17:53.152 07:15:17 env -- scripts/common.sh@353 -- # local d=1 00:17:53.152 07:15:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.152 07:15:17 env -- scripts/common.sh@355 -- # echo 1 00:17:53.152 07:15:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.152 07:15:17 env -- scripts/common.sh@366 -- # decimal 2 00:17:53.152 07:15:17 env -- scripts/common.sh@353 -- # local d=2 00:17:53.152 07:15:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.152 07:15:17 env -- scripts/common.sh@355 -- # echo 2 00:17:53.152 07:15:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.152 07:15:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.152 07:15:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.152 07:15:17 env -- scripts/common.sh@368 -- # return 0 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 07:15:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.152 07:15:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.152 07:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:17:53.152 ************************************ 00:17:53.152 START TEST env_memory 00:17:53.152 ************************************ 00:17:53.152 07:15:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:53.409 00:17:53.409 00:17:53.409 CUnit - A unit testing framework for C - Version 2.1-3 00:17:53.409 http://cunit.sourceforge.net/ 00:17:53.409 00:17:53.409 00:17:53.409 Suite: memory 00:17:53.409 Test: alloc and free memory map ...[2024-11-20 07:15:17.491352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:53.409 passed 00:17:53.409 Test: mem map translation ...[2024-11-20 07:15:17.552087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:53.409 [2024-11-20 07:15:17.552178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:53.409 [2024-11-20 07:15:17.552277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:53.409 [2024-11-20 07:15:17.552310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:53.409 passed 00:17:53.409 Test: mem map registration ...[2024-11-20 07:15:17.650971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:53.409 [2024-11-20 07:15:17.651085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:53.409 passed 00:17:53.666 Test: mem map adjacent registrations ...passed 00:17:53.666 00:17:53.666 Run Summary: Type Total Ran Passed Failed Inactive 00:17:53.666 suites 1 1 n/a 0 0 00:17:53.667 tests 4 4 4 0 0 00:17:53.667 asserts 152 152 152 0 n/a 00:17:53.667 00:17:53.667 Elapsed time = 0.322 seconds 00:17:53.667 00:17:53.667 real 0m0.365s 00:17:53.667 user 0m0.333s 00:17:53.667 sys 0m0.025s 00:17:53.667 07:15:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.667 07:15:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:53.667 ************************************ 00:17:53.667 END TEST env_memory 00:17:53.667 ************************************ 00:17:53.667 07:15:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:53.667 07:15:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.667 07:15:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.667 07:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:17:53.667 
************************************ 00:17:53.667 START TEST env_vtophys 00:17:53.667 ************************************ 00:17:53.667 07:15:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:53.667 EAL: lib.eal log level changed from notice to debug 00:17:53.667 EAL: Detected lcore 0 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 1 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 2 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 3 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 4 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 5 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 6 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 7 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 8 as core 0 on socket 0 00:17:53.667 EAL: Detected lcore 9 as core 0 on socket 0 00:17:53.667 EAL: Maximum logical cores by configuration: 128 00:17:53.667 EAL: Detected CPU lcores: 10 00:17:53.667 EAL: Detected NUMA nodes: 1 00:17:53.667 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:53.667 EAL: Detected shared linkage of DPDK 00:17:53.667 EAL: No shared files mode enabled, IPC will be disabled 00:17:53.667 EAL: Selected IOVA mode 'PA' 00:17:53.667 EAL: Probing VFIO support... 00:17:53.667 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:53.667 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:53.667 EAL: Ask a virtual area of 0x2e000 bytes 00:17:53.667 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:53.667 EAL: Setting up physically contiguous memory... 
00:17:53.667 EAL: Setting maximum number of open files to 524288 00:17:53.667 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:53.667 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:53.667 EAL: Ask a virtual area of 0x61000 bytes 00:17:53.667 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:53.667 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:53.667 EAL: Ask a virtual area of 0x400000000 bytes 00:17:53.667 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:53.667 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:53.667 EAL: Ask a virtual area of 0x61000 bytes 00:17:53.667 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:53.667 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:53.667 EAL: Ask a virtual area of 0x400000000 bytes 00:17:53.667 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:53.667 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:53.667 EAL: Ask a virtual area of 0x61000 bytes 00:17:53.667 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:53.667 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:53.667 EAL: Ask a virtual area of 0x400000000 bytes 00:17:53.667 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:53.667 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:53.667 EAL: Ask a virtual area of 0x61000 bytes 00:17:53.667 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:53.667 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:53.667 EAL: Ask a virtual area of 0x400000000 bytes 00:17:53.667 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:53.667 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:53.667 EAL: Hugepages will be freed exactly as allocated. 
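[Editor's note] The four EAL memseg reservations above are easy to sanity-check: each list pairs a 0x61000-byte header area with a 0x400000000-byte VA window, which is exactly the advertised 8192 segments x 2 MiB hugepages = 16 GiB per list. A quick back-of-the-envelope check (numbers read off the log; EAL does not compute them this way):

```shell
# Arithmetic behind the EAL memseg reservations logged above:
# n_segs * hugepage_sz should equal the 0x400000000-byte VA window.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))    # 2 MiB hugepages
window=$(( 0x400000000 ))             # per-list VA reservation
echo "per-list: $(( n_segs * hugepage_sz / 1024 / 1024 / 1024 )) GiB"
echo "total:    $(( 4 * window / 1024 / 1024 / 1024 )) GiB"
# prints: per-list: 16 GiB
#         total:    64 GiB
```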
00:17:53.667 EAL: No shared files mode enabled, IPC is disabled 00:17:53.667 EAL: No shared files mode enabled, IPC is disabled 00:17:53.925 EAL: TSC frequency is ~2200000 KHz 00:17:53.925 EAL: Main lcore 0 is ready (tid=7feae10e0a40;cpuset=[0]) 00:17:53.925 EAL: Trying to obtain current memory policy. 00:17:53.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:53.925 EAL: Restoring previous memory policy: 0 00:17:53.925 EAL: request: mp_malloc_sync 00:17:53.925 EAL: No shared files mode enabled, IPC is disabled 00:17:53.925 EAL: Heap on socket 0 was expanded by 2MB 00:17:53.925 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:53.925 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:53.925 EAL: Mem event callback 'spdk:(nil)' registered 00:17:53.925 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:17:53.925 00:17:53.925 00:17:53.925 CUnit - A unit testing framework for C - Version 2.1-3 00:17:53.925 http://cunit.sourceforge.net/ 00:17:53.925 00:17:53.925 00:17:53.925 Suite: components_suite 00:17:54.492 Test: vtophys_malloc_test ...passed 00:17:54.492 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.492 EAL: Restoring previous memory policy: 4 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was expanded by 4MB 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was shrunk by 4MB 00:17:54.492 EAL: Trying to obtain current memory policy. 
00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.492 EAL: Restoring previous memory policy: 4 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was expanded by 6MB 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was shrunk by 6MB 00:17:54.492 EAL: Trying to obtain current memory policy. 00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.492 EAL: Restoring previous memory policy: 4 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was expanded by 10MB 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was shrunk by 10MB 00:17:54.492 EAL: Trying to obtain current memory policy. 00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.492 EAL: Restoring previous memory policy: 4 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was expanded by 18MB 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was shrunk by 18MB 00:17:54.492 EAL: Trying to obtain current memory policy. 
00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.492 EAL: Restoring previous memory policy: 4 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was expanded by 34MB 00:17:54.492 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.492 EAL: request: mp_malloc_sync 00:17:54.492 EAL: No shared files mode enabled, IPC is disabled 00:17:54.492 EAL: Heap on socket 0 was shrunk by 34MB 00:17:54.492 EAL: Trying to obtain current memory policy. 00:17:54.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.751 EAL: Restoring previous memory policy: 4 00:17:54.751 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.751 EAL: request: mp_malloc_sync 00:17:54.751 EAL: No shared files mode enabled, IPC is disabled 00:17:54.751 EAL: Heap on socket 0 was expanded by 66MB 00:17:54.751 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.751 EAL: request: mp_malloc_sync 00:17:54.751 EAL: No shared files mode enabled, IPC is disabled 00:17:54.751 EAL: Heap on socket 0 was shrunk by 66MB 00:17:54.751 EAL: Trying to obtain current memory policy. 00:17:54.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:54.751 EAL: Restoring previous memory policy: 4 00:17:54.751 EAL: Calling mem event callback 'spdk:(nil)' 00:17:54.751 EAL: request: mp_malloc_sync 00:17:54.751 EAL: No shared files mode enabled, IPC is disabled 00:17:54.751 EAL: Heap on socket 0 was expanded by 130MB 00:17:55.014 EAL: Calling mem event callback 'spdk:(nil)' 00:17:55.014 EAL: request: mp_malloc_sync 00:17:55.014 EAL: No shared files mode enabled, IPC is disabled 00:17:55.014 EAL: Heap on socket 0 was shrunk by 130MB 00:17:55.272 EAL: Trying to obtain current memory policy. 
00:17:55.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:55.272 EAL: Restoring previous memory policy: 4 00:17:55.272 EAL: Calling mem event callback 'spdk:(nil)' 00:17:55.272 EAL: request: mp_malloc_sync 00:17:55.272 EAL: No shared files mode enabled, IPC is disabled 00:17:55.272 EAL: Heap on socket 0 was expanded by 258MB 00:17:55.840 EAL: Calling mem event callback 'spdk:(nil)' 00:17:55.840 EAL: request: mp_malloc_sync 00:17:55.840 EAL: No shared files mode enabled, IPC is disabled 00:17:55.840 EAL: Heap on socket 0 was shrunk by 258MB 00:17:56.099 EAL: Trying to obtain current memory policy. 00:17:56.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:56.358 EAL: Restoring previous memory policy: 4 00:17:56.358 EAL: Calling mem event callback 'spdk:(nil)' 00:17:56.358 EAL: request: mp_malloc_sync 00:17:56.358 EAL: No shared files mode enabled, IPC is disabled 00:17:56.358 EAL: Heap on socket 0 was expanded by 514MB 00:17:57.294 EAL: Calling mem event callback 'spdk:(nil)' 00:17:57.295 EAL: request: mp_malloc_sync 00:17:57.295 EAL: No shared files mode enabled, IPC is disabled 00:17:57.295 EAL: Heap on socket 0 was shrunk by 514MB 00:17:58.229 EAL: Trying to obtain current memory policy. 
00:17:58.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:58.229 EAL: Restoring previous memory policy: 4 00:17:58.229 EAL: Calling mem event callback 'spdk:(nil)' 00:17:58.229 EAL: request: mp_malloc_sync 00:17:58.229 EAL: No shared files mode enabled, IPC is disabled 00:17:58.229 EAL: Heap on socket 0 was expanded by 1026MB 00:18:00.132 EAL: Calling mem event callback 'spdk:(nil)' 00:18:00.132 EAL: request: mp_malloc_sync 00:18:00.132 EAL: No shared files mode enabled, IPC is disabled 00:18:00.132 EAL: Heap on socket 0 was shrunk by 1026MB 00:18:02.036 passed 00:18:02.036 00:18:02.036 Run Summary: Type Total Ran Passed Failed Inactive 00:18:02.036 suites 1 1 n/a 0 0 00:18:02.036 tests 2 2 2 0 0 00:18:02.036 asserts 5698 5698 5698 0 n/a 00:18:02.036 00:18:02.036 Elapsed time = 7.736 seconds 00:18:02.036 EAL: Calling mem event callback 'spdk:(nil)' 00:18:02.036 EAL: request: mp_malloc_sync 00:18:02.036 EAL: No shared files mode enabled, IPC is disabled 00:18:02.036 EAL: Heap on socket 0 was shrunk by 2MB 00:18:02.036 EAL: No shared files mode enabled, IPC is disabled 00:18:02.036 EAL: No shared files mode enabled, IPC is disabled 00:18:02.036 EAL: No shared files mode enabled, IPC is disabled 00:18:02.036 00:18:02.036 real 0m8.091s 00:18:02.036 user 0m6.898s 00:18:02.036 sys 0m1.024s 00:18:02.036 07:15:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.036 ************************************ 00:18:02.036 END TEST env_vtophys 00:18:02.036 ************************************ 00:18:02.036 07:15:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:18:02.036 07:15:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:02.036 07:15:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.036 07:15:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.036 07:15:25 env -- common/autotest_common.sh@10 -- # set +x 00:18:02.036 
************************************ 00:18:02.036 START TEST env_pci ************************************ 00:18:02.036 07:15:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:18:02.036 00:18:02.036 00:18:02.036 CUnit - A unit testing framework for C - Version 2.1-3 00:18:02.036 http://cunit.sourceforge.net/ 00:18:02.036 00:18:02.036 00:18:02.036 Suite: pci 00:18:02.036 Test: pci_hook ...[2024-11-20 07:15:26.022125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56882 has claimed it 00:18:02.036 passed 00:18:02.036 00:18:02.036 Run Summary: Type Total Ran Passed Failed Inactive 00:18:02.036 suites 1 1 n/a 0 0 00:18:02.036 tests 1 1 1 0 0 00:18:02.036 asserts 25 25 25 0 n/a 00:18:02.036 00:18:02.036 Elapsed time = 0.008 seconds EAL: Cannot find device (10000:00:01.0) 00:18:02.036 EAL: Failed to attach device on primary process 00:18:02.036 00:18:02.036 00:18:02.036 real 0m0.091s 00:18:02.036 user 0m0.040s 00:18:02.036 sys 0m0.050s 00:18:02.036 07:15:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.036 07:15:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:18:02.036 ************************************ 00:18:02.036 END TEST env_pci 00:18:02.036 ************************************ 00:18:02.036 07:15:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:18:02.036 07:15:26 env -- env/env.sh@15 -- # uname 00:18:02.036 07:15:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:18:02.036 07:15:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:18:02.036 07:15:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:02.036 07:15:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:02.036 07:15:26 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.036 07:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:18:02.036 ************************************ 00:18:02.036 START TEST env_dpdk_post_init 00:18:02.036 ************************************ 00:18:02.036 07:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:18:02.036 EAL: Detected CPU lcores: 10 00:18:02.036 EAL: Detected NUMA nodes: 1 00:18:02.036 EAL: Detected shared linkage of DPDK 00:18:02.036 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:02.036 EAL: Selected IOVA mode 'PA' 00:18:02.295 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:02.295 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:18:02.295 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:18:02.295 Starting DPDK initialization... 00:18:02.295 Starting SPDK post initialization... 00:18:02.295 SPDK NVMe probe 00:18:02.295 Attaching to 0000:00:10.0 00:18:02.295 Attaching to 0000:00:11.0 00:18:02.295 Attached to 0000:00:10.0 00:18:02.295 Attached to 0000:00:11.0 00:18:02.295 Cleaning up... 
00:18:02.295 00:18:02.295 real 0m0.346s 00:18:02.295 user 0m0.123s 00:18:02.295 sys 0m0.119s 00:18:02.295 07:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.295 07:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:18:02.295 ************************************ 00:18:02.295 END TEST env_dpdk_post_init 00:18:02.295 ************************************ 00:18:02.295 07:15:26 env -- env/env.sh@26 -- # uname 00:18:02.295 07:15:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:18:02.295 07:15:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:02.295 07:15:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.295 07:15:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.295 07:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:18:02.295 ************************************ 00:18:02.295 START TEST env_mem_callbacks 00:18:02.295 ************************************ 00:18:02.295 07:15:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:18:02.295 EAL: Detected CPU lcores: 10 00:18:02.295 EAL: Detected NUMA nodes: 1 00:18:02.295 EAL: Detected shared linkage of DPDK 00:18:02.554 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:18:02.554 EAL: Selected IOVA mode 'PA' 00:18:02.554 TELEMETRY: No legacy callbacks, legacy socket not created 00:18:02.554 00:18:02.554 00:18:02.554 CUnit - A unit testing framework for C - Version 2.1-3 00:18:02.554 http://cunit.sourceforge.net/ 00:18:02.554 00:18:02.554 00:18:02.554 Suite: memory 00:18:02.554 Test: test ... 
00:18:02.554 register 0x200000200000 2097152 00:18:02.554 malloc 3145728 00:18:02.554 register 0x200000400000 4194304 00:18:02.554 buf 0x2000004fffc0 len 3145728 PASSED 00:18:02.554 malloc 64 00:18:02.554 buf 0x2000004ffec0 len 64 PASSED 00:18:02.554 malloc 4194304 00:18:02.554 register 0x200000800000 6291456 00:18:02.554 buf 0x2000009fffc0 len 4194304 PASSED 00:18:02.554 free 0x2000004fffc0 3145728 00:18:02.554 free 0x2000004ffec0 64 00:18:02.554 unregister 0x200000400000 4194304 PASSED 00:18:02.554 free 0x2000009fffc0 4194304 00:18:02.554 unregister 0x200000800000 6291456 PASSED 00:18:02.554 malloc 8388608 00:18:02.554 register 0x200000400000 10485760 00:18:02.554 buf 0x2000005fffc0 len 8388608 PASSED 00:18:02.554 free 0x2000005fffc0 8388608 00:18:02.554 unregister 0x200000400000 10485760 PASSED 00:18:02.554 passed 00:18:02.554 00:18:02.554 Run Summary: Type Total Ran Passed Failed Inactive 00:18:02.554 suites 1 1 n/a 0 0 00:18:02.554 tests 1 1 1 0 0 00:18:02.554 asserts 15 15 15 0 n/a 00:18:02.554 00:18:02.554 Elapsed time = 0.075 seconds 00:18:02.554 00:18:02.554 real 0m0.288s 00:18:02.554 user 0m0.119s 00:18:02.554 sys 0m0.067s 00:18:02.554 07:15:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.554 07:15:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:18:02.554 ************************************ 00:18:02.554 END TEST env_mem_callbacks 00:18:02.554 ************************************ 00:18:02.814 00:18:02.814 real 0m9.649s 00:18:02.814 user 0m7.712s 00:18:02.814 sys 0m1.547s 00:18:02.814 07:15:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.814 07:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:18:02.814 ************************************ 00:18:02.814 END TEST env 00:18:02.814 ************************************ 00:18:02.814 07:15:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:02.814 07:15:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.814 07:15:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.814 07:15:26 -- common/autotest_common.sh@10 -- # set +x 00:18:02.814 ************************************ 00:18:02.814 START TEST rpc 00:18:02.814 ************************************ 00:18:02.814 07:15:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:18:02.814 * Looking for test storage... 00:18:02.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:02.814 07:15:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.814 07:15:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.814 07:15:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.814 07:15:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.814 07:15:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.814 07:15:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.814 07:15:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.814 07:15:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.814 07:15:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:02.814 07:15:27 rpc -- scripts/common.sh@345 -- # : 1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.814 07:15:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.814 07:15:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@353 -- # local d=1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.814 07:15:27 rpc -- scripts/common.sh@355 -- # echo 1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.814 07:15:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@353 -- # local d=2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.814 07:15:27 rpc -- scripts/common.sh@355 -- # echo 2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.814 07:15:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.814 07:15:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.814 07:15:27 rpc -- scripts/common.sh@368 -- # return 0 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.814 --rc genhtml_branch_coverage=1 00:18:02.814 --rc genhtml_function_coverage=1 00:18:02.814 --rc genhtml_legend=1 00:18:02.814 --rc geninfo_all_blocks=1 00:18:02.814 --rc geninfo_unexecuted_blocks=1 00:18:02.814 00:18:02.814 ' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.814 --rc genhtml_branch_coverage=1 00:18:02.814 --rc genhtml_function_coverage=1 00:18:02.814 --rc genhtml_legend=1 00:18:02.814 --rc geninfo_all_blocks=1 00:18:02.814 --rc geninfo_unexecuted_blocks=1 00:18:02.814 00:18:02.814 ' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:18:02.814 --rc genhtml_branch_coverage=1 00:18:02.814 --rc genhtml_function_coverage=1 00:18:02.814 --rc genhtml_legend=1 00:18:02.814 --rc geninfo_all_blocks=1 00:18:02.814 --rc geninfo_unexecuted_blocks=1 00:18:02.814 00:18:02.814 ' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.814 --rc genhtml_branch_coverage=1 00:18:02.814 --rc genhtml_function_coverage=1 00:18:02.814 --rc genhtml_legend=1 00:18:02.814 --rc geninfo_all_blocks=1 00:18:02.814 --rc geninfo_unexecuted_blocks=1 00:18:02.814 00:18:02.814 ' 00:18:02.814 07:15:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57009 00:18:02.814 07:15:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:02.814 07:15:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57009 00:18:02.814 07:15:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 57009 ']' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.814 07:15:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.073 [2024-11-20 07:15:27.232020] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:03.073 [2024-11-20 07:15:27.232231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57009 ] 00:18:03.332 [2024-11-20 07:15:27.423348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.332 [2024-11-20 07:15:27.582466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:18:03.332 [2024-11-20 07:15:27.582554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57009' to capture a snapshot of events at runtime. 00:18:03.332 [2024-11-20 07:15:27.582577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.332 [2024-11-20 07:15:27.582616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.332 [2024-11-20 07:15:27.582646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57009 for offline analysis/debug. 
00:18:03.332 [2024-11-20 07:15:27.584298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.269 07:15:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.269 07:15:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:18:04.269 07:15:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:04.269 07:15:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:18:04.269 07:15:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:18:04.269 07:15:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:18:04.269 07:15:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.269 07:15:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.269 07:15:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.269 ************************************ 00:18:04.269 START TEST rpc_integrity 00:18:04.269 ************************************ 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:04.269 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.269 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:04.269 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:04.269 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:04.269 07:15:28 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.269 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:04.604 { 00:18:04.604 "name": "Malloc0", 00:18:04.604 "aliases": [ 00:18:04.604 "8eb3200b-b3f5-4c1c-b44b-573a10d4b13d" 00:18:04.604 ], 00:18:04.604 "product_name": "Malloc disk", 00:18:04.604 "block_size": 512, 00:18:04.604 "num_blocks": 16384, 00:18:04.604 "uuid": "8eb3200b-b3f5-4c1c-b44b-573a10d4b13d", 00:18:04.604 "assigned_rate_limits": { 00:18:04.604 "rw_ios_per_sec": 0, 00:18:04.604 "rw_mbytes_per_sec": 0, 00:18:04.604 "r_mbytes_per_sec": 0, 00:18:04.604 "w_mbytes_per_sec": 0 00:18:04.604 }, 00:18:04.604 "claimed": false, 00:18:04.604 "zoned": false, 00:18:04.604 "supported_io_types": { 00:18:04.604 "read": true, 00:18:04.604 "write": true, 00:18:04.604 "unmap": true, 00:18:04.604 "flush": true, 00:18:04.604 "reset": true, 00:18:04.604 "nvme_admin": false, 00:18:04.604 "nvme_io": false, 00:18:04.604 "nvme_io_md": false, 00:18:04.604 "write_zeroes": true, 00:18:04.604 "zcopy": true, 00:18:04.604 "get_zone_info": false, 00:18:04.604 "zone_management": false, 00:18:04.604 "zone_append": false, 00:18:04.604 "compare": false, 00:18:04.604 "compare_and_write": false, 00:18:04.604 "abort": true, 00:18:04.604 "seek_hole": false, 
00:18:04.604 "seek_data": false, 00:18:04.604 "copy": true, 00:18:04.604 "nvme_iov_md": false 00:18:04.604 }, 00:18:04.604 "memory_domains": [ 00:18:04.604 { 00:18:04.604 "dma_device_id": "system", 00:18:04.604 "dma_device_type": 1 00:18:04.604 }, 00:18:04.604 { 00:18:04.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.604 "dma_device_type": 2 00:18:04.604 } 00:18:04.604 ], 00:18:04.604 "driver_specific": {} 00:18:04.604 } 00:18:04.604 ]' 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 [2024-11-20 07:15:28.647899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:18:04.604 [2024-11-20 07:15:28.647986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.604 [2024-11-20 07:15:28.648022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:04.604 [2024-11-20 07:15:28.648056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.604 [2024-11-20 07:15:28.651105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.604 [2024-11-20 07:15:28.651154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:04.604 Passthru0 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:18:04.604 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.604 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:04.604 { 00:18:04.604 "name": "Malloc0", 00:18:04.604 "aliases": [ 00:18:04.604 "8eb3200b-b3f5-4c1c-b44b-573a10d4b13d" 00:18:04.604 ], 00:18:04.604 "product_name": "Malloc disk", 00:18:04.604 "block_size": 512, 00:18:04.604 "num_blocks": 16384, 00:18:04.604 "uuid": "8eb3200b-b3f5-4c1c-b44b-573a10d4b13d", 00:18:04.604 "assigned_rate_limits": { 00:18:04.604 "rw_ios_per_sec": 0, 00:18:04.604 "rw_mbytes_per_sec": 0, 00:18:04.604 "r_mbytes_per_sec": 0, 00:18:04.604 "w_mbytes_per_sec": 0 00:18:04.604 }, 00:18:04.604 "claimed": true, 00:18:04.604 "claim_type": "exclusive_write", 00:18:04.604 "zoned": false, 00:18:04.604 "supported_io_types": { 00:18:04.604 "read": true, 00:18:04.604 "write": true, 00:18:04.604 "unmap": true, 00:18:04.604 "flush": true, 00:18:04.604 "reset": true, 00:18:04.604 "nvme_admin": false, 00:18:04.604 "nvme_io": false, 00:18:04.604 "nvme_io_md": false, 00:18:04.604 "write_zeroes": true, 00:18:04.604 "zcopy": true, 00:18:04.604 "get_zone_info": false, 00:18:04.604 "zone_management": false, 00:18:04.604 "zone_append": false, 00:18:04.604 "compare": false, 00:18:04.604 "compare_and_write": false, 00:18:04.604 "abort": true, 00:18:04.604 "seek_hole": false, 00:18:04.604 "seek_data": false, 00:18:04.604 "copy": true, 00:18:04.604 "nvme_iov_md": false 00:18:04.604 }, 00:18:04.604 "memory_domains": [ 00:18:04.604 { 00:18:04.604 "dma_device_id": "system", 00:18:04.604 "dma_device_type": 1 00:18:04.604 }, 00:18:04.604 { 00:18:04.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.604 "dma_device_type": 2 00:18:04.604 } 00:18:04.604 ], 00:18:04.604 "driver_specific": {} 00:18:04.604 }, 00:18:04.604 { 00:18:04.604 "name": "Passthru0", 00:18:04.604 "aliases": [ 00:18:04.604 "29cd047a-971a-50c8-81a8-3cca7e7e3f17" 00:18:04.604 ], 00:18:04.604 "product_name": "passthru", 00:18:04.604 
"block_size": 512, 00:18:04.604 "num_blocks": 16384, 00:18:04.604 "uuid": "29cd047a-971a-50c8-81a8-3cca7e7e3f17", 00:18:04.604 "assigned_rate_limits": { 00:18:04.604 "rw_ios_per_sec": 0, 00:18:04.604 "rw_mbytes_per_sec": 0, 00:18:04.605 "r_mbytes_per_sec": 0, 00:18:04.605 "w_mbytes_per_sec": 0 00:18:04.605 }, 00:18:04.605 "claimed": false, 00:18:04.605 "zoned": false, 00:18:04.605 "supported_io_types": { 00:18:04.605 "read": true, 00:18:04.605 "write": true, 00:18:04.605 "unmap": true, 00:18:04.605 "flush": true, 00:18:04.605 "reset": true, 00:18:04.605 "nvme_admin": false, 00:18:04.605 "nvme_io": false, 00:18:04.605 "nvme_io_md": false, 00:18:04.605 "write_zeroes": true, 00:18:04.605 "zcopy": true, 00:18:04.605 "get_zone_info": false, 00:18:04.605 "zone_management": false, 00:18:04.605 "zone_append": false, 00:18:04.605 "compare": false, 00:18:04.605 "compare_and_write": false, 00:18:04.605 "abort": true, 00:18:04.605 "seek_hole": false, 00:18:04.605 "seek_data": false, 00:18:04.605 "copy": true, 00:18:04.605 "nvme_iov_md": false 00:18:04.605 }, 00:18:04.605 "memory_domains": [ 00:18:04.605 { 00:18:04.605 "dma_device_id": "system", 00:18:04.605 "dma_device_type": 1 00:18:04.605 }, 00:18:04.605 { 00:18:04.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.605 "dma_device_type": 2 00:18:04.605 } 00:18:04.605 ], 00:18:04.605 "driver_specific": { 00:18:04.605 "passthru": { 00:18:04.605 "name": "Passthru0", 00:18:04.605 "base_bdev_name": "Malloc0" 00:18:04.605 } 00:18:04.605 } 00:18:04.605 } 00:18:04.605 ]' 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.605 07:15:28 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:04.605 07:15:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:04.605 00:18:04.605 real 0m0.336s 00:18:04.605 user 0m0.208s 00:18:04.605 sys 0m0.036s 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.605 07:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:04.605 ************************************ 00:18:04.605 END TEST rpc_integrity 00:18:04.605 ************************************ 00:18:04.605 07:15:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:18:04.605 07:15:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.605 07:15:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.605 07:15:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.605 ************************************ 00:18:04.605 START TEST rpc_plugins 00:18:04.605 ************************************ 00:18:04.605 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:18:04.605 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:18:04.605 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.605 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:18:04.878 { 00:18:04.878 "name": "Malloc1", 00:18:04.878 "aliases": [ 00:18:04.878 "2b914cfb-1569-4ed3-bc8f-69e167395753" 00:18:04.878 ], 00:18:04.878 "product_name": "Malloc disk", 00:18:04.878 "block_size": 4096, 00:18:04.878 "num_blocks": 256, 00:18:04.878 "uuid": "2b914cfb-1569-4ed3-bc8f-69e167395753", 00:18:04.878 "assigned_rate_limits": { 00:18:04.878 "rw_ios_per_sec": 0, 00:18:04.878 "rw_mbytes_per_sec": 0, 00:18:04.878 "r_mbytes_per_sec": 0, 00:18:04.878 "w_mbytes_per_sec": 0 00:18:04.878 }, 00:18:04.878 "claimed": false, 00:18:04.878 "zoned": false, 00:18:04.878 "supported_io_types": { 00:18:04.878 "read": true, 00:18:04.878 "write": true, 00:18:04.878 "unmap": true, 00:18:04.878 "flush": true, 00:18:04.878 "reset": true, 00:18:04.878 "nvme_admin": false, 00:18:04.878 "nvme_io": false, 00:18:04.878 "nvme_io_md": false, 00:18:04.878 "write_zeroes": true, 00:18:04.878 "zcopy": true, 00:18:04.878 "get_zone_info": false, 00:18:04.878 "zone_management": false, 00:18:04.878 "zone_append": false, 00:18:04.878 "compare": false, 00:18:04.878 "compare_and_write": false, 00:18:04.878 "abort": true, 00:18:04.878 "seek_hole": false, 00:18:04.878 "seek_data": false, 00:18:04.878 "copy": 
true, 00:18:04.878 "nvme_iov_md": false 00:18:04.878 }, 00:18:04.878 "memory_domains": [ 00:18:04.878 { 00:18:04.878 "dma_device_id": "system", 00:18:04.878 "dma_device_type": 1 00:18:04.878 }, 00:18:04.878 { 00:18:04.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.878 "dma_device_type": 2 00:18:04.878 } 00:18:04.878 ], 00:18:04.878 "driver_specific": {} 00:18:04.878 } 00:18:04.878 ]' 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:04.878 07:15:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.878 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:18:04.879 07:15:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:18:04.879 07:15:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:18:04.879 00:18:04.879 real 0m0.149s 00:18:04.879 user 0m0.095s 00:18:04.879 sys 0m0.018s 00:18:04.879 07:15:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.879 ************************************ 00:18:04.879 END TEST rpc_plugins 00:18:04.879 07:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:04.879 ************************************ 00:18:04.879 07:15:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:18:04.879 07:15:29 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.879 07:15:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.879 07:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.879 ************************************ 00:18:04.879 START TEST rpc_trace_cmd_test 00:18:04.879 ************************************ 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:18:04.879 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57009", 00:18:04.879 "tpoint_group_mask": "0x8", 00:18:04.879 "iscsi_conn": { 00:18:04.879 "mask": "0x2", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "scsi": { 00:18:04.879 "mask": "0x4", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "bdev": { 00:18:04.879 "mask": "0x8", 00:18:04.879 "tpoint_mask": "0xffffffffffffffff" 00:18:04.879 }, 00:18:04.879 "nvmf_rdma": { 00:18:04.879 "mask": "0x10", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "nvmf_tcp": { 00:18:04.879 "mask": "0x20", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "ftl": { 00:18:04.879 "mask": "0x40", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "blobfs": { 00:18:04.879 "mask": "0x80", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "dsa": { 00:18:04.879 "mask": "0x200", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "thread": { 00:18:04.879 "mask": "0x400", 00:18:04.879 
"tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "nvme_pcie": { 00:18:04.879 "mask": "0x800", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "iaa": { 00:18:04.879 "mask": "0x1000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "nvme_tcp": { 00:18:04.879 "mask": "0x2000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "bdev_nvme": { 00:18:04.879 "mask": "0x4000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "sock": { 00:18:04.879 "mask": "0x8000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "blob": { 00:18:04.879 "mask": "0x10000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "bdev_raid": { 00:18:04.879 "mask": "0x20000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 }, 00:18:04.879 "scheduler": { 00:18:04.879 "mask": "0x40000", 00:18:04.879 "tpoint_mask": "0x0" 00:18:04.879 } 00:18:04.879 }' 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:18:04.879 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:18:05.137 00:18:05.137 real 0m0.273s 00:18:05.137 user 0m0.238s 00:18:05.137 sys 0m0.023s 00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:05.137 07:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 ************************************ 00:18:05.137 END TEST rpc_trace_cmd_test 00:18:05.137 ************************************ 00:18:05.137 07:15:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:18:05.137 07:15:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:18:05.137 07:15:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:18:05.137 07:15:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.137 07:15:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.137 07:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 ************************************ 00:18:05.137 START TEST rpc_daemon_integrity 00:18:05.137 ************************************ 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:05.137 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.395 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:05.395 { 00:18:05.395 "name": "Malloc2", 00:18:05.395 "aliases": [ 00:18:05.395 "a6cc538f-2664-4fa1-b55f-2ea3e7ffe31e" 00:18:05.395 ], 00:18:05.395 "product_name": "Malloc disk", 00:18:05.395 "block_size": 512, 00:18:05.395 "num_blocks": 16384, 00:18:05.395 "uuid": "a6cc538f-2664-4fa1-b55f-2ea3e7ffe31e", 00:18:05.395 "assigned_rate_limits": { 00:18:05.395 "rw_ios_per_sec": 0, 00:18:05.395 "rw_mbytes_per_sec": 0, 00:18:05.395 "r_mbytes_per_sec": 0, 00:18:05.395 "w_mbytes_per_sec": 0 00:18:05.395 }, 00:18:05.396 "claimed": false, 00:18:05.396 "zoned": false, 00:18:05.396 "supported_io_types": { 00:18:05.396 "read": true, 00:18:05.396 "write": true, 00:18:05.396 "unmap": true, 00:18:05.396 "flush": true, 00:18:05.396 "reset": true, 00:18:05.396 "nvme_admin": false, 00:18:05.396 "nvme_io": false, 00:18:05.396 "nvme_io_md": false, 00:18:05.396 "write_zeroes": true, 00:18:05.396 "zcopy": true, 00:18:05.396 "get_zone_info": false, 00:18:05.396 "zone_management": false, 00:18:05.396 "zone_append": false, 00:18:05.396 "compare": false, 00:18:05.396 "compare_and_write": false, 00:18:05.396 "abort": true, 00:18:05.396 "seek_hole": false, 00:18:05.396 "seek_data": false, 00:18:05.396 "copy": true, 00:18:05.396 "nvme_iov_md": false 00:18:05.396 }, 00:18:05.396 "memory_domains": [ 00:18:05.396 { 00:18:05.396 "dma_device_id": "system", 00:18:05.396 "dma_device_type": 1 00:18:05.396 }, 00:18:05.396 { 00:18:05.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.396 "dma_device_type": 2 00:18:05.396 } 
00:18:05.396 ], 00:18:05.396 "driver_specific": {} 00:18:05.396 } 00:18:05.396 ]' 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 [2024-11-20 07:15:29.530433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:18:05.396 [2024-11-20 07:15:29.530525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.396 [2024-11-20 07:15:29.530557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:05.396 [2024-11-20 07:15:29.530576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.396 [2024-11-20 07:15:29.533523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.396 [2024-11-20 07:15:29.533571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:05.396 Passthru0 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:05.396 { 00:18:05.396 "name": "Malloc2", 00:18:05.396 "aliases": [ 00:18:05.396 "a6cc538f-2664-4fa1-b55f-2ea3e7ffe31e" 
00:18:05.396 ], 00:18:05.396 "product_name": "Malloc disk", 00:18:05.396 "block_size": 512, 00:18:05.396 "num_blocks": 16384, 00:18:05.396 "uuid": "a6cc538f-2664-4fa1-b55f-2ea3e7ffe31e", 00:18:05.396 "assigned_rate_limits": { 00:18:05.396 "rw_ios_per_sec": 0, 00:18:05.396 "rw_mbytes_per_sec": 0, 00:18:05.396 "r_mbytes_per_sec": 0, 00:18:05.396 "w_mbytes_per_sec": 0 00:18:05.396 }, 00:18:05.396 "claimed": true, 00:18:05.396 "claim_type": "exclusive_write", 00:18:05.396 "zoned": false, 00:18:05.396 "supported_io_types": { 00:18:05.396 "read": true, 00:18:05.396 "write": true, 00:18:05.396 "unmap": true, 00:18:05.396 "flush": true, 00:18:05.396 "reset": true, 00:18:05.396 "nvme_admin": false, 00:18:05.396 "nvme_io": false, 00:18:05.396 "nvme_io_md": false, 00:18:05.396 "write_zeroes": true, 00:18:05.396 "zcopy": true, 00:18:05.396 "get_zone_info": false, 00:18:05.396 "zone_management": false, 00:18:05.396 "zone_append": false, 00:18:05.396 "compare": false, 00:18:05.396 "compare_and_write": false, 00:18:05.396 "abort": true, 00:18:05.396 "seek_hole": false, 00:18:05.396 "seek_data": false, 00:18:05.396 "copy": true, 00:18:05.396 "nvme_iov_md": false 00:18:05.396 }, 00:18:05.396 "memory_domains": [ 00:18:05.396 { 00:18:05.396 "dma_device_id": "system", 00:18:05.396 "dma_device_type": 1 00:18:05.396 }, 00:18:05.396 { 00:18:05.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.396 "dma_device_type": 2 00:18:05.396 } 00:18:05.396 ], 00:18:05.396 "driver_specific": {} 00:18:05.396 }, 00:18:05.396 { 00:18:05.396 "name": "Passthru0", 00:18:05.396 "aliases": [ 00:18:05.396 "9a0f3206-2689-5051-9189-b00b05e65432" 00:18:05.396 ], 00:18:05.396 "product_name": "passthru", 00:18:05.396 "block_size": 512, 00:18:05.396 "num_blocks": 16384, 00:18:05.396 "uuid": "9a0f3206-2689-5051-9189-b00b05e65432", 00:18:05.396 "assigned_rate_limits": { 00:18:05.396 "rw_ios_per_sec": 0, 00:18:05.396 "rw_mbytes_per_sec": 0, 00:18:05.396 "r_mbytes_per_sec": 0, 00:18:05.396 "w_mbytes_per_sec": 0 
00:18:05.396 }, 00:18:05.396 "claimed": false, 00:18:05.396 "zoned": false, 00:18:05.396 "supported_io_types": { 00:18:05.396 "read": true, 00:18:05.396 "write": true, 00:18:05.396 "unmap": true, 00:18:05.396 "flush": true, 00:18:05.396 "reset": true, 00:18:05.396 "nvme_admin": false, 00:18:05.396 "nvme_io": false, 00:18:05.396 "nvme_io_md": false, 00:18:05.396 "write_zeroes": true, 00:18:05.396 "zcopy": true, 00:18:05.396 "get_zone_info": false, 00:18:05.396 "zone_management": false, 00:18:05.396 "zone_append": false, 00:18:05.396 "compare": false, 00:18:05.396 "compare_and_write": false, 00:18:05.396 "abort": true, 00:18:05.396 "seek_hole": false, 00:18:05.396 "seek_data": false, 00:18:05.396 "copy": true, 00:18:05.396 "nvme_iov_md": false 00:18:05.396 }, 00:18:05.396 "memory_domains": [ 00:18:05.396 { 00:18:05.396 "dma_device_id": "system", 00:18:05.396 "dma_device_type": 1 00:18:05.396 }, 00:18:05.396 { 00:18:05.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.396 "dma_device_type": 2 00:18:05.396 } 00:18:05.396 ], 00:18:05.396 "driver_specific": { 00:18:05.396 "passthru": { 00:18:05.396 "name": "Passthru0", 00:18:05.396 "base_bdev_name": "Malloc2" 00:18:05.396 } 00:18:05.396 } 00:18:05.396 } 00:18:05.396 ]' 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:05.396 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:05.655 07:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:05.655 00:18:05.655 real 0m0.338s 00:18:05.655 user 0m0.202s 00:18:05.655 sys 0m0.037s 00:18:05.655 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.655 07:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:05.655 ************************************ 00:18:05.655 END TEST rpc_daemon_integrity 00:18:05.655 ************************************ 00:18:05.656 07:15:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:05.656 07:15:29 rpc -- rpc/rpc.sh@84 -- # killprocess 57009 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 57009 ']' 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@958 -- # kill -0 57009 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@959 -- # uname 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57009 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.656 
07:15:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57009' 00:18:05.656 killing process with pid 57009 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@973 -- # kill 57009 00:18:05.656 07:15:29 rpc -- common/autotest_common.sh@978 -- # wait 57009 00:18:08.189 00:18:08.189 real 0m5.144s 00:18:08.189 user 0m5.907s 00:18:08.189 sys 0m0.861s 00:18:08.189 07:15:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.189 07:15:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.189 ************************************ 00:18:08.189 END TEST rpc 00:18:08.189 ************************************ 00:18:08.189 07:15:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:08.189 07:15:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.189 07:15:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.189 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:18:08.189 ************************************ 00:18:08.189 START TEST skip_rpc 00:18:08.189 ************************************ 00:18:08.189 07:15:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:08.189 * Looking for test storage... 
00:18:08.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:08.189 07:15:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.189 07:15:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.189 07:15:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.189 07:15:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.189 07:15:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.190 07:15:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.190 --rc genhtml_branch_coverage=1 00:18:08.190 --rc genhtml_function_coverage=1 00:18:08.190 --rc genhtml_legend=1 00:18:08.190 --rc geninfo_all_blocks=1 00:18:08.190 --rc geninfo_unexecuted_blocks=1 00:18:08.190 00:18:08.190 ' 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.190 --rc genhtml_branch_coverage=1 00:18:08.190 --rc genhtml_function_coverage=1 00:18:08.190 --rc genhtml_legend=1 00:18:08.190 --rc geninfo_all_blocks=1 00:18:08.190 --rc geninfo_unexecuted_blocks=1 00:18:08.190 00:18:08.190 ' 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:18:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.190 --rc genhtml_branch_coverage=1 00:18:08.190 --rc genhtml_function_coverage=1 00:18:08.190 --rc genhtml_legend=1 00:18:08.190 --rc geninfo_all_blocks=1 00:18:08.190 --rc geninfo_unexecuted_blocks=1 00:18:08.190 00:18:08.190 ' 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.190 --rc genhtml_branch_coverage=1 00:18:08.190 --rc genhtml_function_coverage=1 00:18:08.190 --rc genhtml_legend=1 00:18:08.190 --rc geninfo_all_blocks=1 00:18:08.190 --rc geninfo_unexecuted_blocks=1 00:18:08.190 00:18:08.190 ' 00:18:08.190 07:15:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:08.190 07:15:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:08.190 07:15:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.190 07:15:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.190 ************************************ 00:18:08.190 START TEST skip_rpc 00:18:08.190 ************************************ 00:18:08.190 07:15:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:18:08.190 07:15:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57239 00:18:08.190 07:15:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:18:08.190 07:15:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:08.190 07:15:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:18:08.190 [2024-11-20 07:15:32.418556] Starting SPDK v25.01-pre 
git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:08.190 [2024-11-20 07:15:32.418744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57239 ] 00:18:08.449 [2024-11-20 07:15:32.596212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.449 [2024-11-20 07:15:32.724462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57239 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57239 ']' 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57239 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57239 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.720 killing process with pid 57239 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57239' 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57239 00:18:13.720 07:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57239 00:18:15.617 00:18:15.617 real 0m7.265s 00:18:15.617 user 0m6.702s 00:18:15.617 sys 0m0.449s 00:18:15.617 07:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.617 07:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.617 ************************************ 00:18:15.617 END TEST skip_rpc 00:18:15.617 ************************************ 00:18:15.617 07:15:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:18:15.617 07:15:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:15.617 07:15:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.617 07:15:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.617 
************************************ 00:18:15.617 START TEST skip_rpc_with_json 00:18:15.617 ************************************ 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57348 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57348 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57348 ']' 00:18:15.617 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.618 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.618 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.618 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.618 07:15:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:15.618 [2024-11-20 07:15:39.751559] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:15.618 [2024-11-20 07:15:39.752452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57348 ] 00:18:15.876 [2024-11-20 07:15:39.939723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.876 [2024-11-20 07:15:40.075018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:16.812 [2024-11-20 07:15:40.900271] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:18:16.812 request: 00:18:16.812 { 00:18:16.812 "trtype": "tcp", 00:18:16.812 "method": "nvmf_get_transports", 00:18:16.812 "req_id": 1 00:18:16.812 } 00:18:16.812 Got JSON-RPC error response 00:18:16.812 response: 00:18:16.812 { 00:18:16.812 "code": -19, 00:18:16.812 "message": "No such device" 00:18:16.812 } 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:16.812 [2024-11-20 07:15:40.912447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.812 07:15:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:16.812 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.812 07:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:16.812 { 00:18:16.812 "subsystems": [ 00:18:16.812 { 00:18:16.812 "subsystem": "fsdev", 00:18:16.812 "config": [ 00:18:16.812 { 00:18:16.812 "method": "fsdev_set_opts", 00:18:16.812 "params": { 00:18:16.812 "fsdev_io_pool_size": 65535, 00:18:16.812 "fsdev_io_cache_size": 256 00:18:16.812 } 00:18:16.812 } 00:18:16.812 ] 00:18:16.812 }, 00:18:16.812 { 00:18:16.812 "subsystem": "keyring", 00:18:16.812 "config": [] 00:18:16.812 }, 00:18:16.812 { 00:18:16.812 "subsystem": "iobuf", 00:18:16.812 "config": [ 00:18:16.812 { 00:18:16.812 "method": "iobuf_set_options", 00:18:16.812 "params": { 00:18:16.812 "small_pool_count": 8192, 00:18:16.812 "large_pool_count": 1024, 00:18:16.812 "small_bufsize": 8192, 00:18:16.812 "large_bufsize": 135168, 00:18:16.812 "enable_numa": false 00:18:16.812 } 00:18:16.812 } 00:18:16.812 ] 00:18:16.812 }, 00:18:16.812 { 00:18:16.812 "subsystem": "sock", 00:18:16.812 "config": [ 00:18:16.812 { 00:18:16.812 "method": "sock_set_default_impl", 00:18:16.812 "params": { 00:18:16.812 "impl_name": "posix" 00:18:16.812 } 00:18:16.812 }, 00:18:16.813 { 00:18:16.813 "method": "sock_impl_set_options", 00:18:16.813 "params": { 00:18:16.813 "impl_name": "ssl", 00:18:16.813 "recv_buf_size": 4096, 00:18:16.813 "send_buf_size": 4096, 00:18:16.813 "enable_recv_pipe": true, 00:18:16.813 "enable_quickack": false, 00:18:16.813 
"enable_placement_id": 0, 00:18:16.813 "enable_zerocopy_send_server": true, 00:18:16.813 "enable_zerocopy_send_client": false, 00:18:16.813 "zerocopy_threshold": 0, 00:18:16.813 "tls_version": 0, 00:18:16.813 "enable_ktls": false 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "sock_impl_set_options", 00:18:16.813 "params": { 00:18:16.813 "impl_name": "posix", 00:18:16.813 "recv_buf_size": 2097152, 00:18:16.813 "send_buf_size": 2097152, 00:18:16.813 "enable_recv_pipe": true, 00:18:16.813 "enable_quickack": false, 00:18:16.813 "enable_placement_id": 0, 00:18:16.813 "enable_zerocopy_send_server": true, 00:18:16.813 "enable_zerocopy_send_client": false, 00:18:16.813 "zerocopy_threshold": 0, 00:18:16.813 "tls_version": 0, 00:18:16.813 "enable_ktls": false 00:18:16.813 } 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "vmd", 00:18:16.813 "config": [] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "accel", 00:18:16.813 "config": [ 00:18:16.813 { 00:18:16.813 "method": "accel_set_options", 00:18:16.813 "params": { 00:18:16.813 "small_cache_size": 128, 00:18:16.813 "large_cache_size": 16, 00:18:16.813 "task_count": 2048, 00:18:16.813 "sequence_count": 2048, 00:18:16.813 "buf_count": 2048 00:18:16.813 } 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "bdev", 00:18:16.813 "config": [ 00:18:16.813 { 00:18:16.813 "method": "bdev_set_options", 00:18:16.813 "params": { 00:18:16.813 "bdev_io_pool_size": 65535, 00:18:16.813 "bdev_io_cache_size": 256, 00:18:16.813 "bdev_auto_examine": true, 00:18:16.813 "iobuf_small_cache_size": 128, 00:18:16.813 "iobuf_large_cache_size": 16 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "bdev_raid_set_options", 00:18:16.813 "params": { 00:18:16.813 "process_window_size_kb": 1024, 00:18:16.813 "process_max_bandwidth_mb_sec": 0 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "bdev_iscsi_set_options", 
00:18:16.813 "params": { 00:18:16.813 "timeout_sec": 30 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "bdev_nvme_set_options", 00:18:16.813 "params": { 00:18:16.813 "action_on_timeout": "none", 00:18:16.813 "timeout_us": 0, 00:18:16.813 "timeout_admin_us": 0, 00:18:16.813 "keep_alive_timeout_ms": 10000, 00:18:16.813 "arbitration_burst": 0, 00:18:16.813 "low_priority_weight": 0, 00:18:16.813 "medium_priority_weight": 0, 00:18:16.813 "high_priority_weight": 0, 00:18:16.813 "nvme_adminq_poll_period_us": 10000, 00:18:16.813 "nvme_ioq_poll_period_us": 0, 00:18:16.813 "io_queue_requests": 0, 00:18:16.813 "delay_cmd_submit": true, 00:18:16.813 "transport_retry_count": 4, 00:18:16.813 "bdev_retry_count": 3, 00:18:16.813 "transport_ack_timeout": 0, 00:18:16.813 "ctrlr_loss_timeout_sec": 0, 00:18:16.813 "reconnect_delay_sec": 0, 00:18:16.813 "fast_io_fail_timeout_sec": 0, 00:18:16.813 "disable_auto_failback": false, 00:18:16.813 "generate_uuids": false, 00:18:16.813 "transport_tos": 0, 00:18:16.813 "nvme_error_stat": false, 00:18:16.813 "rdma_srq_size": 0, 00:18:16.813 "io_path_stat": false, 00:18:16.813 "allow_accel_sequence": false, 00:18:16.813 "rdma_max_cq_size": 0, 00:18:16.813 "rdma_cm_event_timeout_ms": 0, 00:18:16.813 "dhchap_digests": [ 00:18:16.813 "sha256", 00:18:16.813 "sha384", 00:18:16.813 "sha512" 00:18:16.813 ], 00:18:16.813 "dhchap_dhgroups": [ 00:18:16.813 "null", 00:18:16.813 "ffdhe2048", 00:18:16.813 "ffdhe3072", 00:18:16.813 "ffdhe4096", 00:18:16.813 "ffdhe6144", 00:18:16.813 "ffdhe8192" 00:18:16.813 ] 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "bdev_nvme_set_hotplug", 00:18:16.813 "params": { 00:18:16.813 "period_us": 100000, 00:18:16.813 "enable": false 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "bdev_wait_for_examine" 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "scsi", 00:18:16.813 "config": null 00:18:16.813 }, 00:18:16.813 { 
00:18:16.813 "subsystem": "scheduler", 00:18:16.813 "config": [ 00:18:16.813 { 00:18:16.813 "method": "framework_set_scheduler", 00:18:16.813 "params": { 00:18:16.813 "name": "static" 00:18:16.813 } 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "vhost_scsi", 00:18:16.813 "config": [] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "vhost_blk", 00:18:16.813 "config": [] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "ublk", 00:18:16.813 "config": [] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "nbd", 00:18:16.813 "config": [] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "nvmf", 00:18:16.813 "config": [ 00:18:16.813 { 00:18:16.813 "method": "nvmf_set_config", 00:18:16.813 "params": { 00:18:16.813 "discovery_filter": "match_any", 00:18:16.813 "admin_cmd_passthru": { 00:18:16.813 "identify_ctrlr": false 00:18:16.813 }, 00:18:16.813 "dhchap_digests": [ 00:18:16.813 "sha256", 00:18:16.813 "sha384", 00:18:16.813 "sha512" 00:18:16.813 ], 00:18:16.813 "dhchap_dhgroups": [ 00:18:16.813 "null", 00:18:16.813 "ffdhe2048", 00:18:16.813 "ffdhe3072", 00:18:16.813 "ffdhe4096", 00:18:16.813 "ffdhe6144", 00:18:16.813 "ffdhe8192" 00:18:16.813 ] 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "nvmf_set_max_subsystems", 00:18:16.813 "params": { 00:18:16.813 "max_subsystems": 1024 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "nvmf_set_crdt", 00:18:16.813 "params": { 00:18:16.813 "crdt1": 0, 00:18:16.813 "crdt2": 0, 00:18:16.813 "crdt3": 0 00:18:16.813 } 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "method": "nvmf_create_transport", 00:18:16.813 "params": { 00:18:16.813 "trtype": "TCP", 00:18:16.813 "max_queue_depth": 128, 00:18:16.813 "max_io_qpairs_per_ctrlr": 127, 00:18:16.813 "in_capsule_data_size": 4096, 00:18:16.813 "max_io_size": 131072, 00:18:16.813 "io_unit_size": 131072, 00:18:16.813 "max_aq_depth": 128, 00:18:16.813 "num_shared_buffers": 511, 
00:18:16.813 "buf_cache_size": 4294967295, 00:18:16.813 "dif_insert_or_strip": false, 00:18:16.813 "zcopy": false, 00:18:16.813 "c2h_success": true, 00:18:16.813 "sock_priority": 0, 00:18:16.813 "abort_timeout_sec": 1, 00:18:16.813 "ack_timeout": 0, 00:18:16.813 "data_wr_pool_size": 0 00:18:16.813 } 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 }, 00:18:16.813 { 00:18:16.813 "subsystem": "iscsi", 00:18:16.813 "config": [ 00:18:16.813 { 00:18:16.813 "method": "iscsi_set_options", 00:18:16.813 "params": { 00:18:16.813 "node_base": "iqn.2016-06.io.spdk", 00:18:16.813 "max_sessions": 128, 00:18:16.813 "max_connections_per_session": 2, 00:18:16.813 "max_queue_depth": 64, 00:18:16.813 "default_time2wait": 2, 00:18:16.813 "default_time2retain": 20, 00:18:16.813 "first_burst_length": 8192, 00:18:16.813 "immediate_data": true, 00:18:16.813 "allow_duplicated_isid": false, 00:18:16.813 "error_recovery_level": 0, 00:18:16.813 "nop_timeout": 60, 00:18:16.813 "nop_in_interval": 30, 00:18:16.813 "disable_chap": false, 00:18:16.813 "require_chap": false, 00:18:16.813 "mutual_chap": false, 00:18:16.813 "chap_group": 0, 00:18:16.813 "max_large_datain_per_connection": 64, 00:18:16.813 "max_r2t_per_connection": 4, 00:18:16.813 "pdu_pool_size": 36864, 00:18:16.813 "immediate_data_pool_size": 16384, 00:18:16.813 "data_out_pool_size": 2048 00:18:16.813 } 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 } 00:18:16.813 ] 00:18:16.813 } 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57348 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57348 ']' 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57348 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:16.813 07:15:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57348 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.073 killing process with pid 57348 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57348' 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57348 00:18:17.073 07:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57348 00:18:19.600 07:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57397 00:18:19.600 07:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:18:19.600 07:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:24.951 07:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57397 00:18:24.951 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57397 ']' 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57397 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57397 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:18:24.952 killing process with pid 57397 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57397' 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57397 00:18:24.952 07:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57397 00:18:26.324 07:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:26.324 07:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:26.324 00:18:26.324 real 0m10.949s 00:18:26.324 user 0m10.411s 00:18:26.324 sys 0m0.996s 00:18:26.324 07:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.324 07:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:26.324 ************************************ 00:18:26.324 END TEST skip_rpc_with_json 00:18:26.324 ************************************ 00:18:26.324 07:15:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:18:26.324 07:15:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.324 07:15:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.324 07:15:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.582 ************************************ 00:18:26.582 START TEST skip_rpc_with_delay 00:18:26.582 ************************************ 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:18:26.583 07:15:50 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:26.583 [2024-11-20 07:15:50.750445] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.583 00:18:26.583 real 0m0.201s 00:18:26.583 user 0m0.095s 00:18:26.583 sys 0m0.104s 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.583 ************************************ 00:18:26.583 END TEST skip_rpc_with_delay 00:18:26.583 ************************************ 00:18:26.583 07:15:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:18:26.583 07:15:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:18:26.583 07:15:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:18:26.583 07:15:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:18:26.583 07:15:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.583 07:15:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.583 07:15:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.583 ************************************ 00:18:26.583 START TEST exit_on_failed_rpc_init 00:18:26.583 ************************************ 00:18:26.583 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57532 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57532 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:26.841 07:15:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57532 ']' 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.841 07:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:26.841 [2024-11-20 07:15:50.983175] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:26.841 [2024-11-20 07:15:50.983355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57532 ] 00:18:27.100 [2024-11-20 07:15:51.165560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.100 [2024-11-20 07:15:51.325148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:28.131 07:15:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:28.131 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:28.131 [2024-11-20 07:15:52.353334] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:28.131 [2024-11-20 07:15:52.353529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57550 ] 00:18:28.389 [2024-11-20 07:15:52.541413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.389 [2024-11-20 07:15:52.674232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.389 [2024-11-20 07:15:52.674363] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:28.389 [2024-11-20 07:15:52.674387] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:28.389 [2024-11-20 07:15:52.674406] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57532 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57532 ']' 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57532 00:18:28.956 07:15:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57532 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.956 killing process with pid 57532 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57532' 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57532 00:18:28.956 07:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57532 00:18:31.488 00:18:31.488 real 0m4.347s 00:18:31.488 user 0m4.821s 00:18:31.488 sys 0m0.690s 00:18:31.488 07:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.488 07:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 ************************************ 00:18:31.488 END TEST exit_on_failed_rpc_init 00:18:31.488 ************************************ 00:18:31.488 07:15:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:31.488 00:18:31.488 real 0m23.161s 00:18:31.488 user 0m22.190s 00:18:31.488 sys 0m2.461s 00:18:31.488 07:15:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.488 07:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 ************************************ 00:18:31.488 END TEST skip_rpc 00:18:31.488 ************************************ 00:18:31.488 07:15:55 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:31.488 07:15:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.488 07:15:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.488 07:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 ************************************ 00:18:31.488 START TEST rpc_client 00:18:31.488 ************************************ 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:31.488 * Looking for test storage... 00:18:31.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@345 
-- # : 1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.488 07:15:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.488 --rc genhtml_branch_coverage=1 00:18:31.488 --rc genhtml_function_coverage=1 00:18:31.488 --rc genhtml_legend=1 00:18:31.488 --rc geninfo_all_blocks=1 00:18:31.488 --rc geninfo_unexecuted_blocks=1 00:18:31.488 00:18:31.488 ' 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.488 --rc genhtml_branch_coverage=1 00:18:31.488 --rc genhtml_function_coverage=1 00:18:31.488 --rc 
genhtml_legend=1 00:18:31.488 --rc geninfo_all_blocks=1 00:18:31.488 --rc geninfo_unexecuted_blocks=1 00:18:31.488 00:18:31.488 ' 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.488 --rc genhtml_branch_coverage=1 00:18:31.488 --rc genhtml_function_coverage=1 00:18:31.488 --rc genhtml_legend=1 00:18:31.488 --rc geninfo_all_blocks=1 00:18:31.488 --rc geninfo_unexecuted_blocks=1 00:18:31.488 00:18:31.488 ' 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.488 --rc genhtml_branch_coverage=1 00:18:31.488 --rc genhtml_function_coverage=1 00:18:31.488 --rc genhtml_legend=1 00:18:31.488 --rc geninfo_all_blocks=1 00:18:31.488 --rc geninfo_unexecuted_blocks=1 00:18:31.488 00:18:31.488 ' 00:18:31.488 07:15:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:18:31.488 OK 00:18:31.488 07:15:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:18:31.488 00:18:31.488 real 0m0.247s 00:18:31.488 user 0m0.141s 00:18:31.488 sys 0m0.115s 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.488 07:15:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 ************************************ 00:18:31.488 END TEST rpc_client 00:18:31.488 ************************************ 00:18:31.488 07:15:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:31.488 07:15:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.488 07:15:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.488 07:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 ************************************ 00:18:31.488 START TEST json_config 
00:18:31.488 ************************************ 00:18:31.488 07:15:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:31.488 07:15:55 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.488 07:15:55 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.488 07:15:55 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.488 07:15:55 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.488 07:15:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.488 07:15:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.488 07:15:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.488 07:15:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.488 07:15:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.488 07:15:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.488 07:15:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.488 07:15:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.488 07:15:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.489 07:15:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.489 07:15:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:18:31.489 07:15:55 json_config -- scripts/common.sh@345 -- # : 1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.489 07:15:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.489 07:15:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@353 -- # local d=1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.489 07:15:55 json_config -- scripts/common.sh@355 -- # echo 1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.489 07:15:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:18:31.489 07:15:55 json_config -- scripts/common.sh@353 -- # local d=2 00:18:31.489 07:15:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.489 07:15:55 json_config -- scripts/common.sh@355 -- # echo 2 00:18:31.489 07:15:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.489 07:15:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.489 07:15:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.489 07:15:55 json_config -- scripts/common.sh@368 -- # return 0 00:18:31.489 07:15:55 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.489 07:15:55 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.489 --rc genhtml_branch_coverage=1 00:18:31.489 --rc genhtml_function_coverage=1 00:18:31.489 --rc genhtml_legend=1 00:18:31.489 --rc geninfo_all_blocks=1 00:18:31.489 --rc geninfo_unexecuted_blocks=1 00:18:31.489 00:18:31.489 ' 00:18:31.489 07:15:55 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.489 --rc genhtml_branch_coverage=1 00:18:31.489 --rc genhtml_function_coverage=1 00:18:31.489 --rc genhtml_legend=1 00:18:31.489 --rc geninfo_all_blocks=1 00:18:31.489 --rc geninfo_unexecuted_blocks=1 00:18:31.489 00:18:31.489 ' 00:18:31.489 07:15:55 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.489 --rc genhtml_branch_coverage=1 00:18:31.489 --rc genhtml_function_coverage=1 00:18:31.489 --rc genhtml_legend=1 00:18:31.489 --rc geninfo_all_blocks=1 00:18:31.489 --rc geninfo_unexecuted_blocks=1 00:18:31.489 00:18:31.489 ' 00:18:31.489 07:15:55 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.489 --rc genhtml_branch_coverage=1 00:18:31.489 --rc genhtml_function_coverage=1 00:18:31.489 --rc genhtml_legend=1 00:18:31.489 --rc geninfo_all_blocks=1 00:18:31.489 --rc geninfo_unexecuted_blocks=1 00:18:31.489 00:18:31.489 ' 00:18:31.489 07:15:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad2883fb-24dd-40e1-a09a-594bd38040a9 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=ad2883fb-24dd-40e1-a09a-594bd38040a9 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.489 
07:15:55 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.489 07:15:55 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.489 07:15:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.751 07:15:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.751 07:15:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.751 07:15:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.751 07:15:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.751 07:15:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.751 07:15:55 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.751 07:15:55 json_config -- paths/export.sh@5 -- # export PATH 00:18:31.751 07:15:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:18:31.751 07:15:55 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:31.751 07:15:55 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:31.751 07:15:55 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@50 -- # : 0 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:31.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
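Editor's note: the trace above records a real (benign) bash error, `[: : integer expression expected`, raised when an empty string reaches `[ '' -eq 1 ]` in nvmf/common.sh. A minimal reproduction and the usual defensive fix; the variable name here is a hypothetical stand-in, not SPDK's:

```shell
#!/usr/bin/env bash
# Reproduce the error from the log: an empty string in a numeric test
# makes bash print "[: : integer expression expected" and fail the test.
FLAG=""                          # hypothetical stand-in for the empty variable
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "enabled"                 # never reached: the test errors out
fi
# Defensive form: default empty/unset values before the numeric comparison.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With the `${FLAG:-0}` default the comparison is always well-formed, which is why the logged error is harmless noise rather than a test failure.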
00:18:31.751 07:15:55 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:31.751 07:15:55 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:18:31.751 WARNING: No tests are enabled so not running JSON configuration tests 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:18:31.751 07:15:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:18:31.751 00:18:31.751 real 0m0.172s 00:18:31.751 user 0m0.117s 00:18:31.751 sys 0m0.060s 00:18:31.751 07:15:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.751 07:15:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:31.751 ************************************ 00:18:31.751 END TEST json_config 00:18:31.751 ************************************ 00:18:31.751 07:15:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:31.751 07:15:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.751 07:15:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.751 07:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:31.751 ************************************ 00:18:31.751 START TEST json_config_extra_key 00:18:31.751 
************************************ 00:18:31.751 07:15:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:31.751 07:15:55 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.751 07:15:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.751 07:15:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.751 07:15:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.751 07:15:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:18:31.752 07:15:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.752 07:15:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.752 --rc genhtml_branch_coverage=1 00:18:31.752 --rc genhtml_function_coverage=1 00:18:31.752 --rc genhtml_legend=1 00:18:31.752 --rc geninfo_all_blocks=1 00:18:31.752 --rc geninfo_unexecuted_blocks=1 00:18:31.752 00:18:31.752 ' 00:18:31.752 07:15:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.752 --rc genhtml_branch_coverage=1 00:18:31.752 --rc genhtml_function_coverage=1 00:18:31.752 --rc 
genhtml_legend=1 00:18:31.752 --rc geninfo_all_blocks=1 00:18:31.752 --rc geninfo_unexecuted_blocks=1 00:18:31.752 00:18:31.752 ' 00:18:31.752 07:15:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.752 --rc genhtml_branch_coverage=1 00:18:31.752 --rc genhtml_function_coverage=1 00:18:31.752 --rc genhtml_legend=1 00:18:31.752 --rc geninfo_all_blocks=1 00:18:31.752 --rc geninfo_unexecuted_blocks=1 00:18:31.752 00:18:31.752 ' 00:18:31.752 07:15:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.752 --rc genhtml_branch_coverage=1 00:18:31.752 --rc genhtml_function_coverage=1 00:18:31.752 --rc genhtml_legend=1 00:18:31.752 --rc geninfo_all_blocks=1 00:18:31.752 --rc geninfo_unexecuted_blocks=1 00:18:31.752 00:18:31.752 ' 00:18:31.752 07:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad2883fb-24dd-40e1-a09a-594bd38040a9 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=ad2883fb-24dd-40e1-a09a-594bd38040a9 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.752 07:15:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.752 07:15:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.752 07:15:55 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.752 07:15:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.752 07:15:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:18:31.752 07:15:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.752 07:15:55 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:31.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:31.752 07:15:56 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:18:31.752 INFO: launching applications... 
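Editor's note: the `app_pid=(['target']='')` / `declare -A app_pid` lines above show json_config/common.sh keeping per-app settings in bash associative arrays keyed by app name. A self-contained sketch of that pattern (the pid value is illustrative):

```shell
#!/usr/bin/env bash
# Associative-array bookkeeping as traced above: each per-app setting
# (pid, RPC socket, launch params) is keyed by the app name 'target'.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')

app=target
app_pid["$app"]=12345            # illustrative pid, recorded after launch
echo "app=$app pid=${app_pid[$app]} socket=${app_socket[$app]}"
```

Keying every table by the same app name lets one set of start/stop helpers serve multiple apps ('target', 'initiator', ...) without duplicated variables.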
00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:18:31.752 07:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57760 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 
1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:31.752 Waiting for target to run... 00:18:31.752 07:15:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57760 /var/tmp/spdk_tgt.sock 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57760 ']' 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:31.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.753 07:15:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:32.021 [2024-11-20 07:15:56.135023] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:32.021 [2024-11-20 07:15:56.135209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57760 ] 00:18:32.588 [2024-11-20 07:15:56.602960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.588 [2024-11-20 07:15:56.728206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.547 00:18:33.547 INFO: shutting down applications... 
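Editor's note: the `waitforlisten 57760 /var/tmp/spdk_tgt.sock` step above ("Waiting for target to run...", `max_retries=100`) polls until the launched target is alive and listening. A hedged sketch of that idea; function and variable names are illustrative, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Illustrative wait loop: succeed once the target process is alive AND its
# UNIX-domain RPC socket exists, give up after max_retries polls.
wait_for_rpc_socket() {
  local pid=$1 sock=$2 max_retries=${3:-100} i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
    [[ -S $sock ]] && return 0               # socket is up: target is ready
    sleep 0.1
  done
  return 1                                   # gave up after max_retries
}
```

Typical use mirrors the log: `wait_for_rpc_socket "$target_pid" /var/tmp/spdk_tgt.sock || exit 1` before issuing any RPCs.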
00:18:33.547 07:15:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.547 07:15:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:18:33.547 07:15:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:18:33.547 07:15:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57760 ]] 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57760 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:33.547 07:15:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:33.805 07:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:33.805 07:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:33.805 07:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:33.805 07:15:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:34.373 07:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:34.373 07:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:34.373 07:15:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:34.373 07:15:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:34.943 07:15:58 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:34.943 07:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:34.943 07:15:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:34.943 07:15:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:35.510 07:15:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:35.510 07:15:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:35.510 07:15:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:35.510 07:15:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:35.768 07:15:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:35.768 07:15:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:35.768 07:15:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:35.768 07:15:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57760 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:36.415 07:16:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:36.415 SPDK target shutdown done 00:18:36.415 Success 00:18:36.415 07:16:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:18:36.415 00:18:36.415 real 0m4.677s 00:18:36.415 user 0m4.057s 00:18:36.415 sys 0m0.671s 00:18:36.415 07:16:00 json_config_extra_key -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.415 07:16:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:36.415 ************************************ 00:18:36.415 END TEST json_config_extra_key 00:18:36.415 ************************************ 00:18:36.415 07:16:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:36.415 07:16:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.415 07:16:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.415 07:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:36.415 ************************************ 00:18:36.415 START TEST alias_rpc 00:18:36.415 ************************************ 00:18:36.415 07:16:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:36.415 * Looking for test storage... 00:18:36.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:18:36.415 07:16:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.415 07:16:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.415 07:16:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@338 -- # local 
'op=<' 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.674 07:16:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.674 --rc genhtml_branch_coverage=1 00:18:36.674 --rc genhtml_function_coverage=1 00:18:36.674 --rc genhtml_legend=1 00:18:36.674 --rc 
geninfo_all_blocks=1 00:18:36.674 --rc geninfo_unexecuted_blocks=1 00:18:36.674 00:18:36.674 ' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.674 --rc genhtml_branch_coverage=1 00:18:36.674 --rc genhtml_function_coverage=1 00:18:36.674 --rc genhtml_legend=1 00:18:36.674 --rc geninfo_all_blocks=1 00:18:36.674 --rc geninfo_unexecuted_blocks=1 00:18:36.674 00:18:36.674 ' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.674 --rc genhtml_branch_coverage=1 00:18:36.674 --rc genhtml_function_coverage=1 00:18:36.674 --rc genhtml_legend=1 00:18:36.674 --rc geninfo_all_blocks=1 00:18:36.674 --rc geninfo_unexecuted_blocks=1 00:18:36.674 00:18:36.674 ' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.674 --rc genhtml_branch_coverage=1 00:18:36.674 --rc genhtml_function_coverage=1 00:18:36.674 --rc genhtml_legend=1 00:18:36.674 --rc geninfo_all_blocks=1 00:18:36.674 --rc geninfo_unexecuted_blocks=1 00:18:36.674 00:18:36.674 ' 00:18:36.674 07:16:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:36.674 07:16:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57866 00:18:36.674 07:16:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57866 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57866 ']' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
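Editor's note: the `lt 1.15 2` / `cmp_versions` trace repeated throughout this log splits each version string on `.`, `-`, or `:` (`IFS=.-: read -ra ver1`) and compares the pieces numerically, component by component. A simplified standalone sketch of that algorithm (scripts/common.sh factors it differently, with a `decimal` helper and an `op` argument):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for version strings, as traced above:
# split on '.', '-' or ':', then compare numeric components left to right.
version_lt() {
  local -a ver1 ver2
  local v len
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < len; v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # versions are equal
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Note the `${ver1[v]:-0}` defaults: a shorter version like `2` is padded with zeros so `1.15 < 2` compares `1 < 2` first, which is why the lcov 1.15-vs-2 check in the trace takes the "less than" branch.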
00:18:36.674 07:16:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.674 07:16:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 [2024-11-20 07:16:00.852467] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:36.674 [2024-11-20 07:16:00.852647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57866 ] 00:18:36.934 [2024-11-20 07:16:01.039019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.934 [2024-11-20 07:16:01.199692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.869 07:16:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.869 07:16:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:37.869 07:16:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:18:38.437 07:16:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57866 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57866 ']' 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57866 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57866 00:18:38.437 
07:16:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.437 killing process with pid 57866 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57866' 00:18:38.437 07:16:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 57866 00:18:38.438 07:16:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 57866 00:18:40.969 00:18:40.969 real 0m4.237s 00:18:40.969 user 0m4.498s 00:18:40.969 sys 0m0.616s 00:18:40.969 07:16:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.969 ************************************ 00:18:40.969 END TEST alias_rpc 00:18:40.969 ************************************ 00:18:40.969 07:16:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.969 07:16:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:18:40.969 07:16:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:40.969 07:16:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.969 07:16:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.970 07:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:40.970 ************************************ 00:18:40.970 START TEST spdkcli_tcp 00:18:40.970 ************************************ 00:18:40.970 07:16:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:40.970 * Looking for test storage... 
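The spdkcli_tcp trace that follows opens with a version check (`lt 1.15 2` via `cmp_versions` in scripts/common.sh): each version string is split on `.`, `-`, and `:` into an array and compared field by field. A condensed sketch of that split-and-compare logic, under the assumption of purely numeric fields (the function name here is illustrative):

```shell
# Return 0 (true) if version $1 is strictly less than version $2.
# Mirrors the field-by-field comparison traced below: split on ".-:",
# pad missing fields with 0, compare numerically left to right.
version_lt() {
    local IFS='.-:' i a b v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not strictly less-than
}
```

So `version_lt 1.15 2` succeeds (matching the `lt 1.15 2` check in the trace), while comparing a version against itself fails.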
00:18:40.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:40.970 07:16:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.970 07:16:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.970 07:16:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.970 07:16:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.970 --rc genhtml_branch_coverage=1 00:18:40.970 --rc genhtml_function_coverage=1 00:18:40.970 --rc genhtml_legend=1 00:18:40.970 --rc geninfo_all_blocks=1 00:18:40.970 --rc geninfo_unexecuted_blocks=1 00:18:40.970 00:18:40.970 ' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.970 --rc genhtml_branch_coverage=1 00:18:40.970 --rc genhtml_function_coverage=1 00:18:40.970 --rc genhtml_legend=1 00:18:40.970 --rc geninfo_all_blocks=1 00:18:40.970 --rc geninfo_unexecuted_blocks=1 00:18:40.970 00:18:40.970 ' 00:18:40.970 07:16:05 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.970 --rc genhtml_branch_coverage=1 00:18:40.970 --rc genhtml_function_coverage=1 00:18:40.970 --rc genhtml_legend=1 00:18:40.970 --rc geninfo_all_blocks=1 00:18:40.970 --rc geninfo_unexecuted_blocks=1 00:18:40.970 00:18:40.970 ' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.970 --rc genhtml_branch_coverage=1 00:18:40.970 --rc genhtml_function_coverage=1 00:18:40.970 --rc genhtml_legend=1 00:18:40.970 --rc geninfo_all_blocks=1 00:18:40.970 --rc geninfo_unexecuted_blocks=1 00:18:40.970 00:18:40.970 ' 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57974 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:40.970 07:16:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57974 00:18:40.970 07:16:05 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57974 ']' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.970 07:16:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.970 [2024-11-20 07:16:05.131871] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:40.970 [2024-11-20 07:16:05.132026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:18:41.229 [2024-11-20 07:16:05.306945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.229 [2024-11-20 07:16:05.498264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.229 [2024-11-20 07:16:05.498274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.165 07:16:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.165 07:16:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:18:42.165 07:16:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58001 00:18:42.165 07:16:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:18:42.165 07:16:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:18:42.733 [ 00:18:42.733 "bdev_malloc_delete", 
00:18:42.733 "bdev_malloc_create", 00:18:42.733 "bdev_null_resize", 00:18:42.733 "bdev_null_delete", 00:18:42.733 "bdev_null_create", 00:18:42.733 "bdev_nvme_cuse_unregister", 00:18:42.734 "bdev_nvme_cuse_register", 00:18:42.734 "bdev_opal_new_user", 00:18:42.734 "bdev_opal_set_lock_state", 00:18:42.734 "bdev_opal_delete", 00:18:42.734 "bdev_opal_get_info", 00:18:42.734 "bdev_opal_create", 00:18:42.734 "bdev_nvme_opal_revert", 00:18:42.734 "bdev_nvme_opal_init", 00:18:42.734 "bdev_nvme_send_cmd", 00:18:42.734 "bdev_nvme_set_keys", 00:18:42.734 "bdev_nvme_get_path_iostat", 00:18:42.734 "bdev_nvme_get_mdns_discovery_info", 00:18:42.734 "bdev_nvme_stop_mdns_discovery", 00:18:42.734 "bdev_nvme_start_mdns_discovery", 00:18:42.734 "bdev_nvme_set_multipath_policy", 00:18:42.734 "bdev_nvme_set_preferred_path", 00:18:42.734 "bdev_nvme_get_io_paths", 00:18:42.734 "bdev_nvme_remove_error_injection", 00:18:42.734 "bdev_nvme_add_error_injection", 00:18:42.734 "bdev_nvme_get_discovery_info", 00:18:42.734 "bdev_nvme_stop_discovery", 00:18:42.734 "bdev_nvme_start_discovery", 00:18:42.734 "bdev_nvme_get_controller_health_info", 00:18:42.734 "bdev_nvme_disable_controller", 00:18:42.734 "bdev_nvme_enable_controller", 00:18:42.734 "bdev_nvme_reset_controller", 00:18:42.734 "bdev_nvme_get_transport_statistics", 00:18:42.734 "bdev_nvme_apply_firmware", 00:18:42.734 "bdev_nvme_detach_controller", 00:18:42.734 "bdev_nvme_get_controllers", 00:18:42.734 "bdev_nvme_attach_controller", 00:18:42.734 "bdev_nvme_set_hotplug", 00:18:42.734 "bdev_nvme_set_options", 00:18:42.734 "bdev_passthru_delete", 00:18:42.734 "bdev_passthru_create", 00:18:42.734 "bdev_lvol_set_parent_bdev", 00:18:42.734 "bdev_lvol_set_parent", 00:18:42.734 "bdev_lvol_check_shallow_copy", 00:18:42.734 "bdev_lvol_start_shallow_copy", 00:18:42.734 "bdev_lvol_grow_lvstore", 00:18:42.734 "bdev_lvol_get_lvols", 00:18:42.734 "bdev_lvol_get_lvstores", 00:18:42.734 "bdev_lvol_delete", 00:18:42.734 "bdev_lvol_set_read_only", 
00:18:42.734 "bdev_lvol_resize", 00:18:42.734 "bdev_lvol_decouple_parent", 00:18:42.734 "bdev_lvol_inflate", 00:18:42.734 "bdev_lvol_rename", 00:18:42.734 "bdev_lvol_clone_bdev", 00:18:42.734 "bdev_lvol_clone", 00:18:42.734 "bdev_lvol_snapshot", 00:18:42.734 "bdev_lvol_create", 00:18:42.734 "bdev_lvol_delete_lvstore", 00:18:42.734 "bdev_lvol_rename_lvstore", 00:18:42.734 "bdev_lvol_create_lvstore", 00:18:42.734 "bdev_raid_set_options", 00:18:42.734 "bdev_raid_remove_base_bdev", 00:18:42.734 "bdev_raid_add_base_bdev", 00:18:42.734 "bdev_raid_delete", 00:18:42.734 "bdev_raid_create", 00:18:42.734 "bdev_raid_get_bdevs", 00:18:42.734 "bdev_error_inject_error", 00:18:42.734 "bdev_error_delete", 00:18:42.734 "bdev_error_create", 00:18:42.734 "bdev_split_delete", 00:18:42.734 "bdev_split_create", 00:18:42.734 "bdev_delay_delete", 00:18:42.734 "bdev_delay_create", 00:18:42.734 "bdev_delay_update_latency", 00:18:42.734 "bdev_zone_block_delete", 00:18:42.734 "bdev_zone_block_create", 00:18:42.734 "blobfs_create", 00:18:42.734 "blobfs_detect", 00:18:42.734 "blobfs_set_cache_size", 00:18:42.734 "bdev_aio_delete", 00:18:42.734 "bdev_aio_rescan", 00:18:42.734 "bdev_aio_create", 00:18:42.734 "bdev_ftl_set_property", 00:18:42.734 "bdev_ftl_get_properties", 00:18:42.734 "bdev_ftl_get_stats", 00:18:42.734 "bdev_ftl_unmap", 00:18:42.734 "bdev_ftl_unload", 00:18:42.734 "bdev_ftl_delete", 00:18:42.734 "bdev_ftl_load", 00:18:42.734 "bdev_ftl_create", 00:18:42.734 "bdev_virtio_attach_controller", 00:18:42.734 "bdev_virtio_scsi_get_devices", 00:18:42.734 "bdev_virtio_detach_controller", 00:18:42.734 "bdev_virtio_blk_set_hotplug", 00:18:42.734 "bdev_iscsi_delete", 00:18:42.734 "bdev_iscsi_create", 00:18:42.734 "bdev_iscsi_set_options", 00:18:42.734 "accel_error_inject_error", 00:18:42.734 "ioat_scan_accel_module", 00:18:42.734 "dsa_scan_accel_module", 00:18:42.734 "iaa_scan_accel_module", 00:18:42.734 "keyring_file_remove_key", 00:18:42.734 "keyring_file_add_key", 00:18:42.734 
"keyring_linux_set_options", 00:18:42.734 "fsdev_aio_delete", 00:18:42.734 "fsdev_aio_create", 00:18:42.734 "iscsi_get_histogram", 00:18:42.734 "iscsi_enable_histogram", 00:18:42.734 "iscsi_set_options", 00:18:42.734 "iscsi_get_auth_groups", 00:18:42.734 "iscsi_auth_group_remove_secret", 00:18:42.734 "iscsi_auth_group_add_secret", 00:18:42.734 "iscsi_delete_auth_group", 00:18:42.734 "iscsi_create_auth_group", 00:18:42.734 "iscsi_set_discovery_auth", 00:18:42.734 "iscsi_get_options", 00:18:42.734 "iscsi_target_node_request_logout", 00:18:42.734 "iscsi_target_node_set_redirect", 00:18:42.734 "iscsi_target_node_set_auth", 00:18:42.734 "iscsi_target_node_add_lun", 00:18:42.734 "iscsi_get_stats", 00:18:42.734 "iscsi_get_connections", 00:18:42.734 "iscsi_portal_group_set_auth", 00:18:42.734 "iscsi_start_portal_group", 00:18:42.734 "iscsi_delete_portal_group", 00:18:42.734 "iscsi_create_portal_group", 00:18:42.734 "iscsi_get_portal_groups", 00:18:42.734 "iscsi_delete_target_node", 00:18:42.734 "iscsi_target_node_remove_pg_ig_maps", 00:18:42.734 "iscsi_target_node_add_pg_ig_maps", 00:18:42.734 "iscsi_create_target_node", 00:18:42.734 "iscsi_get_target_nodes", 00:18:42.734 "iscsi_delete_initiator_group", 00:18:42.734 "iscsi_initiator_group_remove_initiators", 00:18:42.734 "iscsi_initiator_group_add_initiators", 00:18:42.734 "iscsi_create_initiator_group", 00:18:42.734 "iscsi_get_initiator_groups", 00:18:42.734 "nvmf_set_crdt", 00:18:42.734 "nvmf_set_config", 00:18:42.734 "nvmf_set_max_subsystems", 00:18:42.734 "nvmf_stop_mdns_prr", 00:18:42.734 "nvmf_publish_mdns_prr", 00:18:42.734 "nvmf_subsystem_get_listeners", 00:18:42.734 "nvmf_subsystem_get_qpairs", 00:18:42.734 "nvmf_subsystem_get_controllers", 00:18:42.734 "nvmf_get_stats", 00:18:42.734 "nvmf_get_transports", 00:18:42.734 "nvmf_create_transport", 00:18:42.734 "nvmf_get_targets", 00:18:42.734 "nvmf_delete_target", 00:18:42.734 "nvmf_create_target", 00:18:42.734 "nvmf_subsystem_allow_any_host", 00:18:42.734 
"nvmf_subsystem_set_keys", 00:18:42.734 "nvmf_subsystem_remove_host", 00:18:42.734 "nvmf_subsystem_add_host", 00:18:42.734 "nvmf_ns_remove_host", 00:18:42.734 "nvmf_ns_add_host", 00:18:42.734 "nvmf_subsystem_remove_ns", 00:18:42.734 "nvmf_subsystem_set_ns_ana_group", 00:18:42.734 "nvmf_subsystem_add_ns", 00:18:42.734 "nvmf_subsystem_listener_set_ana_state", 00:18:42.734 "nvmf_discovery_get_referrals", 00:18:42.734 "nvmf_discovery_remove_referral", 00:18:42.734 "nvmf_discovery_add_referral", 00:18:42.734 "nvmf_subsystem_remove_listener", 00:18:42.734 "nvmf_subsystem_add_listener", 00:18:42.734 "nvmf_delete_subsystem", 00:18:42.734 "nvmf_create_subsystem", 00:18:42.734 "nvmf_get_subsystems", 00:18:42.734 "env_dpdk_get_mem_stats", 00:18:42.734 "nbd_get_disks", 00:18:42.734 "nbd_stop_disk", 00:18:42.734 "nbd_start_disk", 00:18:42.734 "ublk_recover_disk", 00:18:42.734 "ublk_get_disks", 00:18:42.734 "ublk_stop_disk", 00:18:42.734 "ublk_start_disk", 00:18:42.734 "ublk_destroy_target", 00:18:42.734 "ublk_create_target", 00:18:42.734 "virtio_blk_create_transport", 00:18:42.734 "virtio_blk_get_transports", 00:18:42.734 "vhost_controller_set_coalescing", 00:18:42.734 "vhost_get_controllers", 00:18:42.734 "vhost_delete_controller", 00:18:42.734 "vhost_create_blk_controller", 00:18:42.734 "vhost_scsi_controller_remove_target", 00:18:42.734 "vhost_scsi_controller_add_target", 00:18:42.734 "vhost_start_scsi_controller", 00:18:42.734 "vhost_create_scsi_controller", 00:18:42.734 "thread_set_cpumask", 00:18:42.734 "scheduler_set_options", 00:18:42.734 "framework_get_governor", 00:18:42.734 "framework_get_scheduler", 00:18:42.734 "framework_set_scheduler", 00:18:42.734 "framework_get_reactors", 00:18:42.734 "thread_get_io_channels", 00:18:42.734 "thread_get_pollers", 00:18:42.734 "thread_get_stats", 00:18:42.734 "framework_monitor_context_switch", 00:18:42.734 "spdk_kill_instance", 00:18:42.734 "log_enable_timestamps", 00:18:42.734 "log_get_flags", 00:18:42.734 "log_clear_flag", 
00:18:42.734 "log_set_flag", 00:18:42.734 "log_get_level", 00:18:42.734 "log_set_level", 00:18:42.734 "log_get_print_level", 00:18:42.734 "log_set_print_level", 00:18:42.734 "framework_enable_cpumask_locks", 00:18:42.734 "framework_disable_cpumask_locks", 00:18:42.734 "framework_wait_init", 00:18:42.734 "framework_start_init", 00:18:42.734 "scsi_get_devices", 00:18:42.734 "bdev_get_histogram", 00:18:42.734 "bdev_enable_histogram", 00:18:42.734 "bdev_set_qos_limit", 00:18:42.734 "bdev_set_qd_sampling_period", 00:18:42.734 "bdev_get_bdevs", 00:18:42.734 "bdev_reset_iostat", 00:18:42.734 "bdev_get_iostat", 00:18:42.734 "bdev_examine", 00:18:42.734 "bdev_wait_for_examine", 00:18:42.734 "bdev_set_options", 00:18:42.734 "accel_get_stats", 00:18:42.734 "accel_set_options", 00:18:42.734 "accel_set_driver", 00:18:42.734 "accel_crypto_key_destroy", 00:18:42.734 "accel_crypto_keys_get", 00:18:42.734 "accel_crypto_key_create", 00:18:42.734 "accel_assign_opc", 00:18:42.734 "accel_get_module_info", 00:18:42.734 "accel_get_opc_assignments", 00:18:42.734 "vmd_rescan", 00:18:42.734 "vmd_remove_device", 00:18:42.734 "vmd_enable", 00:18:42.734 "sock_get_default_impl", 00:18:42.735 "sock_set_default_impl", 00:18:42.735 "sock_impl_set_options", 00:18:42.735 "sock_impl_get_options", 00:18:42.735 "iobuf_get_stats", 00:18:42.735 "iobuf_set_options", 00:18:42.735 "keyring_get_keys", 00:18:42.735 "framework_get_pci_devices", 00:18:42.735 "framework_get_config", 00:18:42.735 "framework_get_subsystems", 00:18:42.735 "fsdev_set_opts", 00:18:42.735 "fsdev_get_opts", 00:18:42.735 "trace_get_info", 00:18:42.735 "trace_get_tpoint_group_mask", 00:18:42.735 "trace_disable_tpoint_group", 00:18:42.735 "trace_enable_tpoint_group", 00:18:42.735 "trace_clear_tpoint_mask", 00:18:42.735 "trace_set_tpoint_mask", 00:18:42.735 "notify_get_notifications", 00:18:42.735 "notify_get_types", 00:18:42.735 "spdk_get_version", 00:18:42.735 "rpc_get_methods" 00:18:42.735 ] 00:18:42.735 07:16:06 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.735 07:16:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:42.735 07:16:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57974 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57974 ']' 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57974 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57974 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.735 killing process with pid 57974 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57974' 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57974 00:18:42.735 07:16:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57974 00:18:45.292 00:18:45.292 real 0m4.344s 00:18:45.292 user 0m7.865s 00:18:45.292 sys 0m0.704s 00:18:45.292 07:16:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.292 ************************************ 00:18:45.292 END TEST spdkcli_tcp 00:18:45.292 ************************************ 00:18:45.292 07:16:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.292 07:16:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:45.292 07:16:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:45.292 07:16:09 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.292 07:16:09 -- common/autotest_common.sh@10 -- # set +x 00:18:45.292 ************************************ 00:18:45.292 START TEST dpdk_mem_utility 00:18:45.292 ************************************ 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:45.292 * Looking for test storage... 00:18:45.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:18:45.292 
07:16:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.292 07:16:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.292 --rc genhtml_branch_coverage=1 00:18:45.292 --rc genhtml_function_coverage=1 00:18:45.292 --rc genhtml_legend=1 00:18:45.292 --rc geninfo_all_blocks=1 00:18:45.292 --rc geninfo_unexecuted_blocks=1 00:18:45.292 00:18:45.292 ' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.292 --rc 
genhtml_branch_coverage=1 00:18:45.292 --rc genhtml_function_coverage=1 00:18:45.292 --rc genhtml_legend=1 00:18:45.292 --rc geninfo_all_blocks=1 00:18:45.292 --rc geninfo_unexecuted_blocks=1 00:18:45.292 00:18:45.292 ' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.292 --rc genhtml_branch_coverage=1 00:18:45.292 --rc genhtml_function_coverage=1 00:18:45.292 --rc genhtml_legend=1 00:18:45.292 --rc geninfo_all_blocks=1 00:18:45.292 --rc geninfo_unexecuted_blocks=1 00:18:45.292 00:18:45.292 ' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.292 --rc genhtml_branch_coverage=1 00:18:45.292 --rc genhtml_function_coverage=1 00:18:45.292 --rc genhtml_legend=1 00:18:45.292 --rc geninfo_all_blocks=1 00:18:45.292 --rc geninfo_unexecuted_blocks=1 00:18:45.292 00:18:45.292 ' 00:18:45.292 07:16:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:45.292 07:16:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58106 00:18:45.292 07:16:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.292 07:16:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58106 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58106 ']' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
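The dpdk_mem_utility trace that follows dumps DPDK memory stats via `env_dpdk_get_mem_stats` and summarizes them with scripts/dpdk_mem_info.py, printing mempool lines of the form `size: <MiB> name: <pool>` under a header like "9 mempools totaling size 595.772034 MiB". A totals line of that shape can be cross-checked with a short awk pass over the summary text; this is a sketch over the log format shown here, not an SPDK tool:

```shell
# Sum per-mempool sizes from an SPDK mem-dump summary on stdin and print a
# totals line in the same style as the dump's "N mempools totaling size" header.
# Expected input lines: "size: 212.674988 MiB name: PDU_immediate_data_Pool"
total_mempool_mib() {
    awk '/^size: .* MiB name: / { total += $2; count++ }
         END { printf "%d mempools totaling size %.6f MiB\n", count, total }'
}
```

Piping the mempool section of a dump through `total_mempool_mib` should reproduce the header's count and MiB total.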
00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.292 07:16:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:45.292 [2024-11-20 07:16:09.555756] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:45.292 [2024-11-20 07:16:09.555956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:18:45.561 [2024-11-20 07:16:09.743256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.821 [2024-11-20 07:16:09.883723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.760 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.760 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:18:46.760 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:46.760 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:46.760 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.760 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:46.760 { 00:18:46.760 "filename": "/tmp/spdk_mem_dump.txt" 00:18:46.760 } 00:18:46.760 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.760 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:46.760 DPDK memory size 816.000000 MiB in 1 heap(s) 00:18:46.760 1 heaps 
totaling size 816.000000 MiB 00:18:46.760 size: 816.000000 MiB heap id: 0 00:18:46.760 end heaps---------- 00:18:46.760 9 mempools totaling size 595.772034 MiB 00:18:46.760 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:46.760 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:46.760 size: 92.545471 MiB name: bdev_io_58106 00:18:46.760 size: 50.003479 MiB name: msgpool_58106 00:18:46.760 size: 36.509338 MiB name: fsdev_io_58106 00:18:46.760 size: 21.763794 MiB name: PDU_Pool 00:18:46.760 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:46.760 size: 4.133484 MiB name: evtpool_58106 00:18:46.760 size: 0.026123 MiB name: Session_Pool 00:18:46.760 end mempools------- 00:18:46.760 6 memzones totaling size 4.142822 MiB 00:18:46.760 size: 1.000366 MiB name: RG_ring_0_58106 00:18:46.760 size: 1.000366 MiB name: RG_ring_1_58106 00:18:46.760 size: 1.000366 MiB name: RG_ring_4_58106 00:18:46.760 size: 1.000366 MiB name: RG_ring_5_58106 00:18:46.760 size: 0.125366 MiB name: RG_ring_2_58106 00:18:46.760 size: 0.015991 MiB name: RG_ring_3_58106 00:18:46.760 end memzones------- 00:18:46.760 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:18:46.761 heap id: 0 total size: 816.000000 MiB number of busy elements: 312 number of free elements: 18 00:18:46.761 list of free elements. 
size: 16.792114 MiB 00:18:46.761 element at address: 0x200006400000 with size: 1.995972 MiB 00:18:46.761 element at address: 0x20000a600000 with size: 1.995972 MiB 00:18:46.761 element at address: 0x200003e00000 with size: 1.991028 MiB 00:18:46.761 element at address: 0x200018d00040 with size: 0.999939 MiB 00:18:46.761 element at address: 0x200019100040 with size: 0.999939 MiB 00:18:46.761 element at address: 0x200019200000 with size: 0.999084 MiB 00:18:46.761 element at address: 0x200031e00000 with size: 0.994324 MiB 00:18:46.761 element at address: 0x200000400000 with size: 0.992004 MiB 00:18:46.761 element at address: 0x200018a00000 with size: 0.959656 MiB 00:18:46.761 element at address: 0x200019500040 with size: 0.936401 MiB 00:18:46.761 element at address: 0x200000200000 with size: 0.716980 MiB 00:18:46.761 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:18:46.761 element at address: 0x200000c00000 with size: 0.490173 MiB 00:18:46.761 element at address: 0x200018e00000 with size: 0.487976 MiB 00:18:46.761 element at address: 0x200019600000 with size: 0.485413 MiB 00:18:46.761 element at address: 0x200012c00000 with size: 0.443481 MiB 00:18:46.761 element at address: 0x200028000000 with size: 0.390442 MiB 00:18:46.761 element at address: 0x200000800000 with size: 0.350891 MiB 00:18:46.761 list of standard malloc elements. 
size: 199.286987 MiB 00:18:46.761 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:18:46.761 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:18:46.761 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:18:46.761 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:18:46.761 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:18:46.761 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:18:46.761 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:18:46.761 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:18:46.761 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:18:46.761 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:18:46.761 element at address: 0x200012bff040 with size: 0.000305 MiB 00:18:46.761 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:18:46.761 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:18:46.761 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:18:46.761 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200000cff000 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff180 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff280 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff380 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff480 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff580 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff680 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff780 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bff880 with size: 0.000244 MiB 00:18:46.761 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71880 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71980 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c72080 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012c72180 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:18:46.761 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:18:46.762 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:18:46.762 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:18:46.762 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac914c0 with size: 0.000244 
MiB 00:18:46.762 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac930c0 
with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:18:46.762 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200028063f40 with size: 0.000244 MiB 00:18:46.762 element at address: 0x200028064040 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806af80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b080 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b180 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b280 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b380 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b480 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b580 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b680 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b780 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b880 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806b980 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806be80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806bf80 with size: 0.000244 MiB 
00:18:46.762 element at address: 0x20002806c080 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c180 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c280 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c380 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c480 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c580 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c680 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c780 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c880 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806c980 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d080 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d180 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d280 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d380 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d480 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d580 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d680 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d780 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d880 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806d980 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806da80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806db80 with 
size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:18:46.762 element at address: 0x20002806de80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806df80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e080 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e180 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e280 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e380 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e480 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e580 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e680 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e780 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e880 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806e980 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f080 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f180 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f280 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f380 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f480 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f580 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f680 with size: 0.000244 MiB 00:18:46.763 element at address: 
0x20002806f780 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f880 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806f980 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:18:46.763 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:18:46.763 list of memzone associated elements. size: 599.920898 MiB 00:18:46.763 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:18:46.763 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:46.763 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:18:46.763 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:46.763 element at address: 0x200012df4740 with size: 92.045105 MiB 00:18:46.763 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58106_0 00:18:46.763 element at address: 0x200000dff340 with size: 48.003113 MiB 00:18:46.763 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58106_0 00:18:46.763 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:18:46.763 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58106_0 00:18:46.763 element at address: 0x2000197be900 with size: 20.255615 MiB 00:18:46.763 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:46.763 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:18:46.763 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:18:46.763 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:18:46.763 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58106_0 00:18:46.763 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:18:46.763 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_58106 00:18:46.763 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:18:46.763 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58106 00:18:46.763 element at address: 0x200018efde00 with size: 1.008179 MiB 00:18:46.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:46.763 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:18:46.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:46.763 element at address: 0x200018afde00 with size: 1.008179 MiB 00:18:46.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:46.763 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:18:46.763 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:46.763 element at address: 0x200000cff100 with size: 1.000549 MiB 00:18:46.763 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58106 00:18:46.763 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:18:46.763 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58106 00:18:46.763 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:18:46.763 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58106 00:18:46.763 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:18:46.763 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58106 00:18:46.763 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:18:46.763 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58106 00:18:46.763 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:18:46.763 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58106 00:18:46.763 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:18:46.763 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:46.763 element at address: 0x200012c72280 with size: 0.500549 MiB 00:18:46.763 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:18:46.763 element at address: 0x20001967c440 with size: 0.250549 MiB 00:18:46.763 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:46.763 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:18:46.763 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58106 00:18:46.763 element at address: 0x20000085df80 with size: 0.125549 MiB 00:18:46.763 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58106 00:18:46.763 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:18:46.763 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:46.763 element at address: 0x200028064140 with size: 0.023804 MiB 00:18:46.763 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:46.763 element at address: 0x200000859d40 with size: 0.016174 MiB 00:18:46.763 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58106 00:18:46.763 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:18:46.763 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:46.763 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:18:46.763 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58106 00:18:46.763 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:18:46.763 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58106 00:18:46.763 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:18:46.763 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58106 00:18:46.763 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:18:46.763 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:18:46.763 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:46.763 07:16:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58106 00:18:46.763 07:16:10 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 58106 ']' 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58106 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58106 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.763 killing process with pid 58106 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58106' 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58106 00:18:46.763 07:16:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58106 00:18:49.297 00:18:49.297 real 0m4.011s 00:18:49.297 user 0m4.009s 00:18:49.297 sys 0m0.642s 00:18:49.297 07:16:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.297 07:16:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:49.297 ************************************ 00:18:49.297 END TEST dpdk_mem_utility 00:18:49.297 ************************************ 00:18:49.297 07:16:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:49.297 07:16:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.297 07:16:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.297 07:16:13 -- common/autotest_common.sh@10 -- # set +x 00:18:49.297 ************************************ 00:18:49.297 START TEST event 00:18:49.297 ************************************ 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:49.297 * Looking for test 
storage... 00:18:49.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1693 -- # lcov --version 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:49.297 07:16:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.297 07:16:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.297 07:16:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.297 07:16:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.297 07:16:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.297 07:16:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.297 07:16:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.297 07:16:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.297 07:16:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.297 07:16:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.297 07:16:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.297 07:16:13 event -- scripts/common.sh@344 -- # case "$op" in 00:18:49.297 07:16:13 event -- scripts/common.sh@345 -- # : 1 00:18:49.297 07:16:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.297 07:16:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.297 07:16:13 event -- scripts/common.sh@365 -- # decimal 1 00:18:49.297 07:16:13 event -- scripts/common.sh@353 -- # local d=1 00:18:49.297 07:16:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.297 07:16:13 event -- scripts/common.sh@355 -- # echo 1 00:18:49.297 07:16:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.297 07:16:13 event -- scripts/common.sh@366 -- # decimal 2 00:18:49.297 07:16:13 event -- scripts/common.sh@353 -- # local d=2 00:18:49.297 07:16:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.297 07:16:13 event -- scripts/common.sh@355 -- # echo 2 00:18:49.297 07:16:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.297 07:16:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.297 07:16:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.297 07:16:13 event -- scripts/common.sh@368 -- # return 0 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.297 --rc genhtml_branch_coverage=1 00:18:49.297 --rc genhtml_function_coverage=1 00:18:49.297 --rc genhtml_legend=1 00:18:49.297 --rc geninfo_all_blocks=1 00:18:49.297 --rc geninfo_unexecuted_blocks=1 00:18:49.297 00:18:49.297 ' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.297 --rc genhtml_branch_coverage=1 00:18:49.297 --rc genhtml_function_coverage=1 00:18:49.297 --rc genhtml_legend=1 00:18:49.297 --rc geninfo_all_blocks=1 00:18:49.297 --rc geninfo_unexecuted_blocks=1 00:18:49.297 00:18:49.297 ' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:49.297 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:49.297 --rc genhtml_branch_coverage=1 00:18:49.297 --rc genhtml_function_coverage=1 00:18:49.297 --rc genhtml_legend=1 00:18:49.297 --rc geninfo_all_blocks=1 00:18:49.297 --rc geninfo_unexecuted_blocks=1 00:18:49.297 00:18:49.297 ' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.297 --rc genhtml_branch_coverage=1 00:18:49.297 --rc genhtml_function_coverage=1 00:18:49.297 --rc genhtml_legend=1 00:18:49.297 --rc geninfo_all_blocks=1 00:18:49.297 --rc geninfo_unexecuted_blocks=1 00:18:49.297 00:18:49.297 ' 00:18:49.297 07:16:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:49.297 07:16:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:18:49.297 07:16:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:18:49.297 07:16:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.297 07:16:13 event -- common/autotest_common.sh@10 -- # set +x 00:18:49.297 ************************************ 00:18:49.297 START TEST event_perf 00:18:49.297 ************************************ 00:18:49.297 07:16:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:49.298 Running I/O for 1 seconds...[2024-11-20 07:16:13.547934] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:49.298 [2024-11-20 07:16:13.548106] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58214 ] 00:18:49.556 [2024-11-20 07:16:13.733225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.814 [2024-11-20 07:16:13.878672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.814 [2024-11-20 07:16:13.878798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.814 [2024-11-20 07:16:13.878900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.814 [2024-11-20 07:16:13.878912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.246 Running I/O for 1 seconds... 00:18:51.246 lcore 0: 189749 00:18:51.246 lcore 1: 189748 00:18:51.246 lcore 2: 189749 00:18:51.246 lcore 3: 189749 00:18:51.246 done. 
00:18:51.246 00:18:51.246 real 0m1.640s 00:18:51.246 user 0m4.389s 00:18:51.246 sys 0m0.125s 00:18:51.246 07:16:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.246 07:16:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 ************************************ 00:18:51.246 END TEST event_perf 00:18:51.246 ************************************ 00:18:51.246 07:16:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:51.246 07:16:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:51.247 07:16:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.247 07:16:15 event -- common/autotest_common.sh@10 -- # set +x 00:18:51.247 ************************************ 00:18:51.247 START TEST event_reactor 00:18:51.247 ************************************ 00:18:51.247 07:16:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:51.247 [2024-11-20 07:16:15.236645] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:51.247 [2024-11-20 07:16:15.236841] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:18:51.247 [2024-11-20 07:16:15.425195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.505 [2024-11-20 07:16:15.560475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.880 test_start 00:18:52.880 oneshot 00:18:52.880 tick 100 00:18:52.880 tick 100 00:18:52.880 tick 250 00:18:52.880 tick 100 00:18:52.880 tick 100 00:18:52.880 tick 100 00:18:52.880 tick 250 00:18:52.880 tick 500 00:18:52.880 tick 100 00:18:52.880 tick 100 00:18:52.880 tick 250 00:18:52.880 tick 100 00:18:52.880 tick 100 00:18:52.880 test_end 00:18:52.880 00:18:52.880 real 0m1.605s 00:18:52.880 user 0m1.389s 00:18:52.880 sys 0m0.105s 00:18:52.880 07:16:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.880 07:16:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 ************************************ 00:18:52.880 END TEST event_reactor 00:18:52.880 ************************************ 00:18:52.880 07:16:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:52.880 07:16:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:52.880 07:16:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.880 07:16:16 event -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 ************************************ 00:18:52.880 START TEST event_reactor_perf 00:18:52.880 ************************************ 00:18:52.880 07:16:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:52.880 [2024-11-20 
07:16:16.898351] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:52.880 [2024-11-20 07:16:16.898819] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:18:52.880 [2024-11-20 07:16:17.083619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.137 [2024-11-20 07:16:17.228326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.514 test_start 00:18:54.514 test_end 00:18:54.514 Performance: 273730 events per second 00:18:54.514 00:18:54.514 real 0m1.622s 00:18:54.514 user 0m1.403s 00:18:54.514 sys 0m0.108s 00:18:54.514 ************************************ 00:18:54.514 END TEST event_reactor_perf 00:18:54.514 ************************************ 00:18:54.514 07:16:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.514 07:16:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:18:54.514 07:16:18 event -- event/event.sh@49 -- # uname -s 00:18:54.514 07:16:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:54.514 07:16:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:54.514 07:16:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:54.514 07:16:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.514 07:16:18 event -- common/autotest_common.sh@10 -- # set +x 00:18:54.514 ************************************ 00:18:54.514 START TEST event_scheduler 00:18:54.514 ************************************ 00:18:54.514 07:16:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:54.514 * Looking for test storage... 
00:18:54.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:18:54.514 07:16:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.514 07:16:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.514 07:16:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.514 07:16:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.514 07:16:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.515 07:16:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.515 --rc genhtml_branch_coverage=1 00:18:54.515 --rc genhtml_function_coverage=1 00:18:54.515 --rc genhtml_legend=1 00:18:54.515 --rc geninfo_all_blocks=1 00:18:54.515 --rc geninfo_unexecuted_blocks=1 00:18:54.515 00:18:54.515 ' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.515 --rc genhtml_branch_coverage=1 00:18:54.515 --rc genhtml_function_coverage=1 00:18:54.515 --rc 
genhtml_legend=1 00:18:54.515 --rc geninfo_all_blocks=1 00:18:54.515 --rc geninfo_unexecuted_blocks=1 00:18:54.515 00:18:54.515 ' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.515 --rc genhtml_branch_coverage=1 00:18:54.515 --rc genhtml_function_coverage=1 00:18:54.515 --rc genhtml_legend=1 00:18:54.515 --rc geninfo_all_blocks=1 00:18:54.515 --rc geninfo_unexecuted_blocks=1 00:18:54.515 00:18:54.515 ' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.515 --rc genhtml_branch_coverage=1 00:18:54.515 --rc genhtml_function_coverage=1 00:18:54.515 --rc genhtml_legend=1 00:18:54.515 --rc geninfo_all_blocks=1 00:18:54.515 --rc geninfo_unexecuted_blocks=1 00:18:54.515 00:18:54.515 ' 00:18:54.515 07:16:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:54.515 07:16:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58366 00:18:54.515 07:16:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:54.515 07:16:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:54.515 07:16:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58366 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58366 ']' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.515 07:16:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:54.774 [2024-11-20 07:16:18.801959] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:18:54.774 [2024-11-20 07:16:18.802148] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58366 ] 00:18:54.774 [2024-11-20 07:16:18.983642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.033 [2024-11-20 07:16:19.183213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.033 [2024-11-20 07:16:19.183345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.033 [2024-11-20 07:16:19.183473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.033 [2024-11-20 07:16:19.183696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:18:55.600 07:16:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:55.600 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:55.600 POWER: Cannot set governor of lcore 0 to userspace 00:18:55.600 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:55.600 POWER: Cannot set governor of lcore 0 to performance 00:18:55.600 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:55.600 POWER: Cannot set governor of lcore 0 to userspace 00:18:55.600 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:55.600 POWER: Cannot set governor of lcore 0 to userspace 00:18:55.600 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:18:55.600 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:18:55.600 POWER: Unable to set Power Management Environment for lcore 0 00:18:55.600 [2024-11-20 07:16:19.865974] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:18:55.600 [2024-11-20 07:16:19.866004] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:18:55.600 [2024-11-20 07:16:19.866019] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:18:55.600 [2024-11-20 07:16:19.866046] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:55.600 [2024-11-20 07:16:19.866059] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:55.600 [2024-11-20 07:16:19.866074] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.600 07:16:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.600 07:16:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:18:56.163 [2024-11-20 07:16:20.190131] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:18:56.163 07:16:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:56.163 07:16:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:56.163 07:16:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 ************************************ 00:18:56.163 START TEST scheduler_create_thread 00:18:56.163 ************************************ 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 2 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 3 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 4 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 5 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 6 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 7 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 8 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 9 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 10 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.163 07:16:20 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.164 07:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:57.098 07:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.098 07:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:57.098 07:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:57.098 07:16:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.098 07:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 ************************************ 00:18:58.470 END TEST scheduler_create_thread 00:18:58.470 ************************************ 00:18:58.471 07:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.471 00:18:58.471 real 0m2.136s 00:18:58.471 user 0m0.017s 00:18:58.471 sys 0m0.008s 00:18:58.471 07:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.471 07:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:58.471 07:16:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:58.471 07:16:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58366 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58366 ']' 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58366 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58366 00:18:58.471 killing process with pid 58366 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58366' 00:18:58.471 07:16:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58366 00:18:58.471 07:16:22 
event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58366 00:18:58.728 [2024-11-20 07:16:22.819887] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:59.661 ************************************ 00:18:59.661 END TEST event_scheduler 00:18:59.661 ************************************ 00:18:59.661 00:18:59.661 real 0m5.374s 00:18:59.661 user 0m9.317s 00:18:59.661 sys 0m0.500s 00:18:59.661 07:16:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.661 07:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:59.661 07:16:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:59.661 07:16:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:59.661 07:16:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.661 07:16:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.661 07:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:18:59.661 ************************************ 00:18:59.661 START TEST app_repeat 00:18:59.661 ************************************ 00:18:59.661 07:16:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:59.661 07:16:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:59.919 Process app_repeat pid: 58472 00:18:59.919 spdk_app_start Round 0 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@19 
-- # repeat_pid=58472 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58472' 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:59.919 07:16:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58472 /var/tmp/spdk-nbd.sock 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58472 ']' 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:59.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.919 07:16:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:59.919 [2024-11-20 07:16:24.010149] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:18:59.919 [2024-11-20 07:16:24.010316] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58472 ] 00:18:59.919 [2024-11-20 07:16:24.185024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:00.178 [2024-11-20 07:16:24.334785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.178 [2024-11-20 07:16:24.334795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.109 07:16:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.109 07:16:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:01.109 07:16:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:01.109 Malloc0 00:19:01.365 07:16:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:01.623 Malloc1 00:19:01.623 07:16:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:01.623 07:16:25 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.623 07:16:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:01.885 /dev/nbd0 00:19:01.885 07:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.885 07:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:01.885 1+0 records in 00:19:01.885 1+0 
records out 00:19:01.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288221 s, 14.2 MB/s 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:01.885 07:16:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:01.885 07:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.885 07:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.885 07:16:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:02.142 /dev/nbd1 00:19:02.142 07:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.142 07:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.142 07:16:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:02.399 1+0 records in 00:19:02.399 1+0 records out 00:19:02.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393938 s, 10.4 MB/s 00:19:02.399 07:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.399 07:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:02.399 07:16:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:02.399 07:16:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.399 07:16:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:02.399 07:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.399 07:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.399 07:16:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:02.399 07:16:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.399 07:16:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:02.657 { 00:19:02.657 "nbd_device": "/dev/nbd0", 00:19:02.657 "bdev_name": "Malloc0" 00:19:02.657 }, 00:19:02.657 { 00:19:02.657 "nbd_device": "/dev/nbd1", 00:19:02.657 "bdev_name": "Malloc1" 00:19:02.657 } 00:19:02.657 ]' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:02.657 { 00:19:02.657 "nbd_device": "/dev/nbd0", 00:19:02.657 "bdev_name": "Malloc0" 00:19:02.657 }, 00:19:02.657 { 00:19:02.657 "nbd_device": "/dev/nbd1", 00:19:02.657 "bdev_name": "Malloc1" 00:19:02.657 } 00:19:02.657 ]' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:02.657 /dev/nbd1' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:02.657 /dev/nbd1' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:02.657 256+0 records in 00:19:02.657 256+0 records out 00:19:02.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010454 s, 100 MB/s 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:02.657 256+0 records in 00:19:02.657 256+0 records out 00:19:02.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292558 s, 35.8 MB/s 00:19:02.657 07:16:26 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:02.657 256+0 records in 00:19:02.657 256+0 records out 00:19:02.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333337 s, 31.5 MB/s 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.657 07:16:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.225 07:16:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.483 07:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:03.740 07:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:03.741 07:16:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:03.741 07:16:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:04.306 07:16:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:05.241 [2024-11-20 07:16:29.437147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:05.499 [2024-11-20 07:16:29.565186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.499 [2024-11-20 07:16:29.565198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.499 
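The `nbd_get_count` calls traced above pipe the `nbd_get_disks` RPC response through `jq -r '.[] | .nbd_device'` and `grep -c /dev/nbd` to count attached devices. A hedged, jq-free sketch of that counting step, using a canned JSON string in place of the live RPC response (the device and bdev names here are illustrative only):

```shell
# Canned stand-in for the output of:
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# The real helper extracts device paths with jq before counting; grepping the
# raw JSON gives the same count here. grep -c exits non-zero when nothing
# matches, hence the || true guard (as when the disk list is empty, count=0).
count=$(printf '%s\n' "$nbd_disks_json" | grep -c /dev/nbd || true)
echo "attached nbd devices: $count"
```

This mirrors the `count=2` result after `nbd_start_disk` and the `count=0` result after both `nbd_stop_disk` calls in the trace.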
[2024-11-20 07:16:29.756727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:05.499 [2024-11-20 07:16:29.756852] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:07.481 spdk_app_start Round 1 00:19:07.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:07.481 07:16:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:07.481 07:16:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:19:07.481 07:16:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58472 /var/tmp/spdk-nbd.sock 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58472 ']' 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.481 07:16:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:07.481 07:16:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:08.048 Malloc0 00:19:08.048 07:16:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:08.307 Malloc1 00:19:08.307 07:16:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.307 07:16:32 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.307 07:16:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:08.566 /dev/nbd0 00:19:08.566 07:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.566 07:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:08.566 1+0 records in 00:19:08.566 1+0 records out 00:19:08.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319932 s, 12.8 MB/s 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.566 
07:16:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.566 07:16:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:08.566 07:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.566 07:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.566 07:16:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:08.825 /dev/nbd1 00:19:08.825 07:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:08.825 07:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.825 07:16:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:08.825 1+0 records in 00:19:08.825 1+0 records out 00:19:08.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325865 s, 12.6 MB/s 00:19:08.825 07:16:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.825 07:16:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:08.825 07:16:33 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:08.825 07:16:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.825 07:16:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:08.825 07:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.825 07:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.825 07:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:08.825 07:16:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.825 07:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:09.083 { 00:19:09.083 "nbd_device": "/dev/nbd0", 00:19:09.083 "bdev_name": "Malloc0" 00:19:09.083 }, 00:19:09.083 { 00:19:09.083 "nbd_device": "/dev/nbd1", 00:19:09.083 "bdev_name": "Malloc1" 00:19:09.083 } 00:19:09.083 ]' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:09.083 { 00:19:09.083 "nbd_device": "/dev/nbd0", 00:19:09.083 "bdev_name": "Malloc0" 00:19:09.083 }, 00:19:09.083 { 00:19:09.083 "nbd_device": "/dev/nbd1", 00:19:09.083 "bdev_name": "Malloc1" 00:19:09.083 } 00:19:09.083 ]' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:09.083 /dev/nbd1' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:09.083 /dev/nbd1' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:09.083 
07:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.083 07:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:09.084 07:16:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:09.084 256+0 records in 00:19:09.084 256+0 records out 00:19:09.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700036 s, 150 MB/s 00:19:09.084 07:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:09.084 07:16:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:09.342 256+0 records in 00:19:09.342 256+0 records out 00:19:09.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025316 s, 41.4 MB/s 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:09.342 256+0 records in 00:19:09.342 256+0 records out 00:19:09.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306859 s, 34.2 MB/s 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.342 07:16:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:09.604 07:16:33 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.604 07:16:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.863 07:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.122 07:16:34 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:10.122 07:16:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:10.122 07:16:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:10.694 07:16:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:11.629 [2024-11-20 07:16:35.896737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:11.886 [2024-11-20 07:16:36.029127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.886 [2024-11-20 07:16:36.029130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.144 [2024-11-20 07:16:36.221254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:12.144 [2024-11-20 07:16:36.221392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:14.043 spdk_app_start Round 2 00:19:14.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
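Each round above exercises `nbd_dd_data_verify` twice: a write pass that copies 256 x 4 KiB of `/dev/urandom` data onto each nbd device, then a verify pass that byte-compares the first 1 MiB back with `cmp -b -n 1M`. A minimal sketch of that write/verify cycle, with throwaway regular files standing in for `/dev/nbd0` so it can run without an nbd device (the direct-I/O flags are dropped for regular files, and all paths are illustrative, not the test suite's own):

```shell
tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
dev_file=$(mktemp)   # plays the role of /dev/nbd0

# Write phase: generate 1 MiB of random data, then copy it to the "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$dev_file" bs=4096 count=256 2>/dev/null

# Verify phase: byte-compare the first 1 MiB, as the traced cmp call does.
cmp -b -n 1M "$tmp_file" "$dev_file"
verify_status=$?

rm -f "$tmp_file" "$dev_file"
echo "verify status: $verify_status"
```

A non-zero `verify_status` here corresponds to the data-integrity failure that would abort the round in the trace above.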
00:19:14.043 07:16:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:14.044 07:16:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:19:14.044 07:16:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58472 /var/tmp/spdk-nbd.sock 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58472 ']' 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.044 07:16:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:14.044 07:16:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.044 07:16:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:14.044 07:16:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:14.301 Malloc0 00:19:14.301 07:16:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:14.867 Malloc1 00:19:14.867 07:16:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.867 07:16:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:15.124 /dev/nbd0 00:19:15.124 07:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:15.124 07:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:15.124 1+0 records in 00:19:15.124 1+0 records out 00:19:15.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280216 s, 14.6 MB/s 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:15.124 07:16:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:15.124 07:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.124 07:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.124 07:16:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:15.380 /dev/nbd1 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:15.380 07:16:39 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:15.380 1+0 records in 00:19:15.380 1+0 records out 00:19:15.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343829 s, 11.9 MB/s 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:15.380 07:16:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.380 07:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:15.946 { 00:19:15.946 "nbd_device": "/dev/nbd0", 00:19:15.946 "bdev_name": "Malloc0" 00:19:15.946 }, 00:19:15.946 { 00:19:15.946 "nbd_device": "/dev/nbd1", 00:19:15.946 "bdev_name": "Malloc1" 00:19:15.946 } 00:19:15.946 ]' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:15.946 { 
00:19:15.946 "nbd_device": "/dev/nbd0", 00:19:15.946 "bdev_name": "Malloc0" 00:19:15.946 }, 00:19:15.946 { 00:19:15.946 "nbd_device": "/dev/nbd1", 00:19:15.946 "bdev_name": "Malloc1" 00:19:15.946 } 00:19:15.946 ]' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:15.946 /dev/nbd1' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:15.946 /dev/nbd1' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:15.946 256+0 records in 00:19:15.946 256+0 records out 00:19:15.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627476 s, 167 MB/s 00:19:15.946 07:16:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:15.946 07:16:39 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:15.946 256+0 records in 00:19:15.946 256+0 records out 00:19:15.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320264 s, 32.7 MB/s 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:15.946 256+0 records in 00:19:15.946 256+0 records out 00:19:15.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283577 s, 37.0 MB/s 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:15.946 07:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.512 07:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:16.769 07:16:40 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.769 07:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:17.026 07:16:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:17.026 07:16:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:17.590 07:16:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:18.522 
[2024-11-20 07:16:42.723845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.780 [2024-11-20 07:16:42.888624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.780 [2024-11-20 07:16:42.888624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.038 [2024-11-20 07:16:43.097274] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:19.038 [2024-11-20 07:16:43.097423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:20.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:20.411 07:16:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58472 /var/tmp/spdk-nbd.sock 00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58472 ']' 00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.411 07:16:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:20.978 07:16:44 event.app_repeat -- event/event.sh@39 -- # killprocess 58472 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58472 ']' 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58472 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.978 07:16:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58472 00:19:20.978 killing process with pid 58472 00:19:20.978 07:16:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.978 07:16:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.978 07:16:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58472' 00:19:20.978 07:16:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58472 00:19:20.978 07:16:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58472 00:19:21.914 spdk_app_start is called in Round 0. 00:19:21.914 Shutdown signal received, stop current app iteration 00:19:21.914 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:19:21.914 spdk_app_start is called in Round 1. 00:19:21.914 Shutdown signal received, stop current app iteration 00:19:21.914 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:19:21.914 spdk_app_start is called in Round 2. 
00:19:21.914 Shutdown signal received, stop current app iteration 00:19:21.914 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:19:21.914 spdk_app_start is called in Round 3. 00:19:21.914 Shutdown signal received, stop current app iteration 00:19:21.914 07:16:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:19:21.914 07:16:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:19:21.914 00:19:21.914 real 0m22.017s 00:19:21.914 user 0m48.886s 00:19:21.914 sys 0m3.156s 00:19:21.914 07:16:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.914 ************************************ 00:19:21.914 END TEST app_repeat 00:19:21.914 ************************************ 00:19:21.914 07:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:21.914 07:16:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:19:21.914 07:16:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:21.914 07:16:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.914 07:16:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.914 07:16:46 event -- common/autotest_common.sh@10 -- # set +x 00:19:21.914 ************************************ 00:19:21.914 START TEST cpu_locks 00:19:21.914 ************************************ 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:21.914 * Looking for test storage... 
00:19:21.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.914 07:16:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.914 --rc genhtml_branch_coverage=1 00:19:21.914 --rc genhtml_function_coverage=1 00:19:21.914 --rc genhtml_legend=1 00:19:21.914 --rc geninfo_all_blocks=1 00:19:21.914 --rc geninfo_unexecuted_blocks=1 00:19:21.914 00:19:21.914 ' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.914 --rc genhtml_branch_coverage=1 00:19:21.914 --rc genhtml_function_coverage=1 00:19:21.914 --rc genhtml_legend=1 00:19:21.914 --rc geninfo_all_blocks=1 00:19:21.914 --rc geninfo_unexecuted_blocks=1 
00:19:21.914 00:19:21.914 ' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.914 --rc genhtml_branch_coverage=1 00:19:21.914 --rc genhtml_function_coverage=1 00:19:21.914 --rc genhtml_legend=1 00:19:21.914 --rc geninfo_all_blocks=1 00:19:21.914 --rc geninfo_unexecuted_blocks=1 00:19:21.914 00:19:21.914 ' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.914 --rc genhtml_branch_coverage=1 00:19:21.914 --rc genhtml_function_coverage=1 00:19:21.914 --rc genhtml_legend=1 00:19:21.914 --rc geninfo_all_blocks=1 00:19:21.914 --rc geninfo_unexecuted_blocks=1 00:19:21.914 00:19:21.914 ' 00:19:21.914 07:16:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:19:21.914 07:16:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:19:21.914 07:16:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:19:21.914 07:16:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.914 07:16:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:21.914 ************************************ 00:19:21.914 START TEST default_locks 00:19:21.914 ************************************ 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58947 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:22.173 
07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58947 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58947 ']' 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.173 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:22.173 [2024-11-20 07:16:46.326784] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:22.173 [2024-11-20 07:16:46.326977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:19:22.446 [2024-11-20 07:16:46.508196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.446 [2024-11-20 07:16:46.663979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.380 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.380 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:19:23.380 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58947 00:19:23.380 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:23.380 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58947 00:19:23.945 07:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58947 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58947 ']' 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58947 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58947 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.946 killing process with pid 58947 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58947' 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58947 00:19:23.946 07:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58947 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58947 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58947 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58947 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58947 ']' 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58947) - No such process 00:19:26.476 ERROR: process (pid: 58947) is no longer running 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:26.476 00:19:26.476 real 0m4.116s 00:19:26.476 user 0m4.211s 00:19:26.476 sys 0m0.754s 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.476 07:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 ************************************ 00:19:26.476 END TEST default_locks 00:19:26.476 ************************************ 00:19:26.476 07:16:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:19:26.476 07:16:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:19:26.476 07:16:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.476 07:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 ************************************ 00:19:26.476 START TEST default_locks_via_rpc 00:19:26.476 ************************************ 00:19:26.476 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:19:26.476 07:16:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59024 00:19:26.476 07:16:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:26.476 07:16:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59024 00:19:26.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.476 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59024 ']' 00:19:26.477 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.477 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.477 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.477 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.477 07:16:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:26.477 [2024-11-20 07:16:50.499278] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:26.477 [2024-11-20 07:16:50.499465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59024 ] 00:19:26.477 [2024-11-20 07:16:50.681987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.735 [2024-11-20 07:16:50.816526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.669 07:16:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59024 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59024 00:19:27.669 07:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59024 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59024 ']' 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59024 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59024 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.928 killing process with pid 59024 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59024' 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59024 00:19:27.928 07:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59024 00:19:30.460 00:19:30.460 real 0m4.019s 00:19:30.460 user 0m4.056s 00:19:30.460 sys 0m0.745s 00:19:30.460 07:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.460 07:16:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:30.460 ************************************ 00:19:30.460 END TEST default_locks_via_rpc 00:19:30.460 ************************************ 00:19:30.460 07:16:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:30.460 07:16:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.460 07:16:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.460 07:16:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:30.460 ************************************ 00:19:30.460 START TEST non_locking_app_on_locked_coremask 00:19:30.460 ************************************ 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59098 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59098 /var/tmp/spdk.sock 00:19:30.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59098 ']' 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.460 07:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:30.460 [2024-11-20 07:16:54.587662] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:30.460 [2024-11-20 07:16:54.588189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59098 ] 00:19:30.799 [2024-11-20 07:16:54.772542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.799 [2024-11-20 07:16:54.902079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59114 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59114 /var/tmp/spdk2.sock 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59114 ']' 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:31.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.749 07:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 [2024-11-20 07:16:55.966368] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:31.749 [2024-11-20 07:16:55.966840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59114 ] 00:19:32.007 [2024-11-20 07:16:56.174393] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:32.007 [2024-11-20 07:16:56.174477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.266 [2024-11-20 07:16:56.444022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.796 07:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.796 07:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:34.796 07:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59098 00:19:34.796 07:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59098 00:19:34.796 07:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59098 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59098 ']' 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59098 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59098 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.431 killing process with pid 59098 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59098' 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59098 00:19:35.431 07:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59098 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59114 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59114 ']' 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59114 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59114 00:19:40.698 killing process with pid 59114 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59114' 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59114 00:19:40.698 07:17:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59114 00:19:42.075 ************************************ 00:19:42.075 END TEST non_locking_app_on_locked_coremask 00:19:42.075 ************************************ 00:19:42.075 00:19:42.075 real 0m11.762s 
00:19:42.075 user 0m12.387s 00:19:42.075 sys 0m1.558s 00:19:42.075 07:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.075 07:17:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:42.075 07:17:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:42.075 07:17:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.075 07:17:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.075 07:17:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:42.075 ************************************ 00:19:42.075 START TEST locking_app_on_unlocked_coremask 00:19:42.075 ************************************ 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59265 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59265 ']' 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.075 07:17:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:42.333 [2024-11-20 07:17:06.380499] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:42.333 [2024-11-20 07:17:06.380738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:19:42.333 [2024-11-20 07:17:06.570503] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:19:42.333 [2024-11-20 07:17:06.570622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.609 [2024-11-20 07:17:06.724292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59287 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 
00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:43.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.546 07:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:43.546 [2024-11-20 07:17:07.710856] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:43.546 [2024-11-20 07:17:07.711311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:19:43.805 [2024-11-20 07:17:07.905285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.064 [2024-11-20 07:17:08.170317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.642 07:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.642 07:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:46.642 07:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59287 00:19:46.642 07:17:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59287 00:19:46.642 07:17:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59265 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59265 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59265 00:19:47.209 killing process with pid 59265 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59265' 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59265 00:19:47.209 07:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59265 00:19:51.394 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59287 00:19:51.394 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:19:51.394 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59287 00:19:51.394 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59287 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.652 killing process with pid 59287 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59287' 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59287 00:19:51.652 07:17:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59287 00:19:54.182 ************************************ 00:19:54.182 END TEST locking_app_on_unlocked_coremask 00:19:54.182 ************************************ 00:19:54.182 00:19:54.182 real 0m11.725s 00:19:54.182 user 0m12.236s 00:19:54.182 sys 0m1.522s 00:19:54.182 07:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.182 07:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:54.182 07:17:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:19:54.182 07:17:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.182 07:17:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.182 07:17:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:54.182 ************************************ 00:19:54.182 START TEST 
locking_app_on_locked_coremask 00:19:54.182 ************************************ 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:19:54.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59435 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59435 /var/tmp/spdk.sock 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59435 ']' 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.182 07:17:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:54.182 [2024-11-20 07:17:18.134597] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:54.182 [2024-11-20 07:17:18.134759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:19:54.182 [2024-11-20 07:17:18.312602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.182 [2024-11-20 07:17:18.444600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.123 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.123 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:55.123 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59451 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59451 /var/tmp/spdk2.sock 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59451 /var/tmp/spdk2.sock 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:55.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59451 /var/tmp/spdk2.sock 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59451 ']' 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.124 07:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:55.382 [2024-11-20 07:17:19.466856] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:55.382 [2024-11-20 07:17:19.467464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59451 ] 00:19:55.641 [2024-11-20 07:17:19.674555] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59435 has claimed it. 00:19:55.641 [2024-11-20 07:17:19.674742] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:19:55.900 ERROR: process (pid: 59451) is no longer running 00:19:55.900 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59451) - No such process 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59435 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:55.900 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59435 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59435 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59435 ']' 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59435 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59435 00:19:56.467 
killing process with pid 59435 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59435' 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59435 00:19:56.467 07:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59435 00:19:58.999 00:19:58.999 real 0m4.839s 00:19:58.999 user 0m5.180s 00:19:58.999 sys 0m0.925s 00:19:58.999 07:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.999 ************************************ 00:19:58.999 END TEST locking_app_on_locked_coremask 00:19:58.999 07:17:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:58.999 ************************************ 00:19:58.999 07:17:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:19:58.999 07:17:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.999 07:17:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.999 07:17:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:58.999 ************************************ 00:19:58.999 START TEST locking_overlapped_coremask 00:19:58.999 ************************************ 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59526 00:19:58.999 07:17:22 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59526 /var/tmp/spdk.sock 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59526 ']' 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.999 07:17:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:58.999 [2024-11-20 07:17:23.045391] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:58.999 [2024-11-20 07:17:23.045616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:19:58.999 [2024-11-20 07:17:23.238440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.258 [2024-11-20 07:17:23.400544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.258 [2024-11-20 07:17:23.400652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.258 [2024-11-20 07:17:23.400668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59544 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59544 /var/tmp/spdk2.sock 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59544 /var/tmp/spdk2.sock 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59544 /var/tmp/spdk2.sock 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59544 ']' 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:00.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.194 07:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 [2024-11-20 07:17:24.451063] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:00.194 [2024-11-20 07:17:24.451507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 00:20:00.452 [2024-11-20 07:17:24.656936] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59526 has claimed it. 00:20:00.452 [2024-11-20 07:17:24.657038] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
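The claim failure above is the expected outcome of this test: the first target holds core locks for mask `0x7` (cores 0-2) while the second requests `0x1c` (cores 2-4), so the two masks overlap on core 2. The overlap arithmetic can be reproduced directly in shell:

```shell
# Coremask overlap check: 0x7 = cores 0-2, 0x1c = cores 2-4.
mask1=0x7
mask2=0x1c
overlap=$(( mask1 & mask2 ))                # bitwise AND of the two coremasks
printf 'overlap mask: 0x%x\n' "$overlap"    # 0x4 -> bit 2, i.e. core 2 is contested
```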
00:20:01.019 ERROR: process (pid: 59544) is no longer running 00:20:01.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59544) - No such process 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59526 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59526 ']' 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59526 00:20:01.019 07:17:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59526 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59526' 00:20:01.019 killing process with pid 59526 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59526 00:20:01.019 07:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59526 00:20:03.550 00:20:03.550 real 0m4.441s 00:20:03.550 user 0m12.009s 00:20:03.550 sys 0m0.705s 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:03.550 ************************************ 00:20:03.550 END TEST locking_overlapped_coremask 00:20:03.550 ************************************ 00:20:03.550 07:17:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:20:03.550 07:17:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:03.550 07:17:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.550 07:17:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:03.550 ************************************ 00:20:03.550 START TEST 
locking_overlapped_coremask_via_rpc 00:20:03.550 ************************************ 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59608 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59608 /var/tmp/spdk.sock 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59608 ']' 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.550 07:17:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:03.550 [2024-11-20 07:17:27.535969] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:03.550 [2024-11-20 07:17:27.536160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:20:03.550 [2024-11-20 07:17:27.730381] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:20:03.550 [2024-11-20 07:17:27.730477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.813 [2024-11-20 07:17:27.890985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.813 [2024-11-20 07:17:27.891059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.813 [2024-11-20 07:17:27.891063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.747 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.747 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:04.747 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59626 00:20:04.747 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59626 /var/tmp/spdk2.sock 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59626 ']' 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.748 07:17:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:04.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.748 07:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.748 [2024-11-20 07:17:28.923186] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:04.748 [2024-11-20 07:17:28.923837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:20:05.015 [2024-11-20 07:17:29.118900] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
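SPDK serializes core ownership through lock files named `/var/tmp/spdk_cpu_lock_NNN`, and the `check_remaining_locks` helper traced later in this log compares the files actually present against a brace expansion of the expected core numbers. A self-contained sketch of that comparison, redirected into a temp directory instead of `/var/tmp` so it can run anywhere (the temp-dir indirection is an assumption for illustration, not how `cpu_locks.sh` does it):

```shell
# Sketch of the check_remaining_locks comparison from cpu_locks.sh,
# pointed at a temp dir so the example is self-contained.
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}              # pretend cores 0-2 hold locks

locks=("$tmp"/spdk_cpu_lock_*)                     # lock files actually on disk (glob)
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})   # set a 0x7 mask should leave behind

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match expected set"
fi
rm -rf "$tmp"
```

Because both arrays expand in sorted order, flattening them with `[*]` and comparing the strings is enough to detect a missing or extra lock file.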
00:20:05.015 [2024-11-20 07:17:29.119192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:05.276 [2024-11-20 07:17:29.391954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.276 [2024-11-20 07:17:29.395742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.276 [2024-11-20 07:17:29.395760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.810 07:17:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:07.810 [2024-11-20 07:17:31.775899] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59608 has claimed it. 00:20:07.810 request: 00:20:07.810 { 00:20:07.810 "method": "framework_enable_cpumask_locks", 00:20:07.810 "req_id": 1 00:20:07.810 } 00:20:07.810 Got JSON-RPC error response 00:20:07.810 response: 00:20:07.810 { 00:20:07.810 "code": -32603, 00:20:07.810 "message": "Failed to claim CPU core: 2" 00:20:07.810 } 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59608 /var/tmp/spdk.sock 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59608 ']' 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.810 07:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59626 /var/tmp/spdk2.sock 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59626 ']' 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:08.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.078 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:08.335 ************************************ 00:20:08.335 END TEST locking_overlapped_coremask_via_rpc 00:20:08.335 ************************************ 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:08.335 00:20:08.335 real 0m5.075s 00:20:08.335 user 0m1.990s 00:20:08.335 sys 0m0.260s 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.335 07:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 07:17:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:20:08.335 07:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59608 ]] 00:20:08.335 07:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59608 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59608 ']' 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59608 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59608 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.335 killing process with pid 59608 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59608' 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59608 00:20:08.335 07:17:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59608 00:20:10.862 07:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59626 ]] 00:20:10.862 07:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59626 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59626 ']' 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59626 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59626 00:20:10.862 killing process with pid 59626 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59626' 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59626 00:20:10.862 07:17:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59626 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59608 ]] 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59608 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59608 ']' 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59608 00:20:13.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59608) - No such process 00:20:13.392 Process with pid 59608 is not found 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59608 is not found' 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59626 ]] 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59626 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59626 ']' 00:20:13.392 Process with pid 59626 is not found 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59626 00:20:13.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59626) - No such process 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59626 is not found' 00:20:13.392 07:17:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:13.392 00:20:13.392 real 0m51.282s 00:20:13.392 user 1m30.260s 00:20:13.392 sys 0m7.759s 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.392 07:17:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 
************************************ 00:20:13.392 END TEST cpu_locks 00:20:13.392 ************************************ 00:20:13.392 ************************************ 00:20:13.392 END TEST event 00:20:13.392 ************************************ 00:20:13.392 00:20:13.392 real 1m24.049s 00:20:13.392 user 2m35.861s 00:20:13.392 sys 0m12.024s 00:20:13.392 07:17:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.392 07:17:37 event -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 07:17:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:13.392 07:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.392 07:17:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.392 07:17:37 -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 ************************************ 00:20:13.392 START TEST thread 00:20:13.392 ************************************ 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:13.392 * Looking for test storage... 
00:20:13.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:13.392 07:17:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.392 07:17:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.392 07:17:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.392 07:17:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.392 07:17:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.392 07:17:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.392 07:17:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.392 07:17:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.392 07:17:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.392 07:17:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.392 07:17:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.392 07:17:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:20:13.392 07:17:37 thread -- scripts/common.sh@345 -- # : 1 00:20:13.392 07:17:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.392 07:17:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.392 07:17:37 thread -- scripts/common.sh@365 -- # decimal 1 00:20:13.392 07:17:37 thread -- scripts/common.sh@353 -- # local d=1 00:20:13.392 07:17:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.392 07:17:37 thread -- scripts/common.sh@355 -- # echo 1 00:20:13.392 07:17:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.392 07:17:37 thread -- scripts/common.sh@366 -- # decimal 2 00:20:13.392 07:17:37 thread -- scripts/common.sh@353 -- # local d=2 00:20:13.392 07:17:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.392 07:17:37 thread -- scripts/common.sh@355 -- # echo 2 00:20:13.392 07:17:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.392 07:17:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.392 07:17:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.392 07:17:37 thread -- scripts/common.sh@368 -- # return 0 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.392 --rc genhtml_branch_coverage=1 00:20:13.392 --rc genhtml_function_coverage=1 00:20:13.392 --rc genhtml_legend=1 00:20:13.392 --rc geninfo_all_blocks=1 00:20:13.392 --rc geninfo_unexecuted_blocks=1 00:20:13.392 00:20:13.392 ' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.392 --rc genhtml_branch_coverage=1 00:20:13.392 --rc genhtml_function_coverage=1 00:20:13.392 --rc genhtml_legend=1 00:20:13.392 --rc geninfo_all_blocks=1 00:20:13.392 --rc geninfo_unexecuted_blocks=1 00:20:13.392 00:20:13.392 ' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:13.392 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.392 --rc genhtml_branch_coverage=1 00:20:13.392 --rc genhtml_function_coverage=1 00:20:13.392 --rc genhtml_legend=1 00:20:13.392 --rc geninfo_all_blocks=1 00:20:13.392 --rc geninfo_unexecuted_blocks=1 00:20:13.392 00:20:13.392 ' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.392 --rc genhtml_branch_coverage=1 00:20:13.392 --rc genhtml_function_coverage=1 00:20:13.392 --rc genhtml_legend=1 00:20:13.392 --rc geninfo_all_blocks=1 00:20:13.392 --rc geninfo_unexecuted_blocks=1 00:20:13.392 00:20:13.392 ' 00:20:13.392 07:17:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.392 07:17:37 thread -- common/autotest_common.sh@10 -- # set +x 00:20:13.392 ************************************ 00:20:13.392 START TEST thread_poller_perf 00:20:13.392 ************************************ 00:20:13.392 07:17:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:13.392 [2024-11-20 07:17:37.613700] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
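The `poller_perf` summary that follows reports `poller_cost` as the busy cycle count divided by `total_run_count`, with the nanosecond figure derived from the 2.2 GHz TSC. Reproducing the integer arithmetic for the first run's numbers:

```shell
# poller_cost derivation for the first poller_perf run in this log:
busy_cyc=2216219487      # "busy" cycle count from the summary
runs=312000              # total_run_count
tsc_hz=2200000000        # tsc_hz: 2.2 GHz

cost_cyc=$(( busy_cyc / runs ))                   # cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # same cost converted to nanoseconds
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"   # 7103 (cyc), 3228 (nsec)
```

The second run (zero-period pollers) works out the same way: 2204186216 busy cycles over 3852000 runs gives 572 cycles, i.e. 260 nsec per poll.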
00:20:13.393 [2024-11-20 07:17:37.613858] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:20:13.650 [2024-11-20 07:17:37.795408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.909 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:20:13.909 [2024-11-20 07:17:37.981733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.285 [2024-11-20T07:17:39.574Z] ====================================== 00:20:15.285 [2024-11-20T07:17:39.574Z] busy:2216219487 (cyc) 00:20:15.285 [2024-11-20T07:17:39.574Z] total_run_count: 312000 00:20:15.285 [2024-11-20T07:17:39.574Z] tsc_hz: 2200000000 (cyc) 00:20:15.285 [2024-11-20T07:17:39.574Z] ====================================== 00:20:15.285 [2024-11-20T07:17:39.574Z] poller_cost: 7103 (cyc), 3228 (nsec) 00:20:15.285 00:20:15.285 real 0m1.660s 00:20:15.285 user 0m1.443s 00:20:15.285 sys 0m0.106s 00:20:15.285 07:17:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.285 ************************************ 00:20:15.285 END TEST thread_poller_perf 00:20:15.285 ************************************ 00:20:15.285 07:17:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:15.285 07:17:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:15.285 07:17:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:15.285 07:17:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.285 07:17:39 thread -- common/autotest_common.sh@10 -- # set +x 00:20:15.285 ************************************ 00:20:15.285 START TEST thread_poller_perf 00:20:15.285 
************************************ 00:20:15.285 07:17:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:15.285 [2024-11-20 07:17:39.342478] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:15.285 [2024-11-20 07:17:39.342813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59869 ] 00:20:15.285 [2024-11-20 07:17:39.538095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.543 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:20:15.543 [2024-11-20 07:17:39.689201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.919 [2024-11-20T07:17:41.208Z] ====================================== 00:20:16.919 [2024-11-20T07:17:41.208Z] busy:2204186216 (cyc) 00:20:16.919 [2024-11-20T07:17:41.208Z] total_run_count: 3852000 00:20:16.919 [2024-11-20T07:17:41.208Z] tsc_hz: 2200000000 (cyc) 00:20:16.919 [2024-11-20T07:17:41.208Z] ====================================== 00:20:16.919 [2024-11-20T07:17:41.208Z] poller_cost: 572 (cyc), 260 (nsec) 00:20:16.919 00:20:16.919 real 0m1.664s 00:20:16.919 user 0m1.435s 00:20:16.919 sys 0m0.119s 00:20:16.919 07:17:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.919 ************************************ 00:20:16.919 END TEST thread_poller_perf 00:20:16.919 ************************************ 00:20:16.919 07:17:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:16.919 07:17:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:20:16.919 00:20:16.919 real 0m3.601s 00:20:16.919 user 0m3.007s 00:20:16.919 sys 0m0.373s 00:20:16.919 ************************************ 
00:20:16.919 END TEST thread 00:20:16.919 ************************************ 00:20:16.919 07:17:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.919 07:17:40 thread -- common/autotest_common.sh@10 -- # set +x 00:20:16.919 07:17:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:20:16.919 07:17:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:16.919 07:17:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.919 07:17:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.919 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:20:16.919 ************************************ 00:20:16.919 START TEST app_cmdline 00:20:16.919 ************************************ 00:20:16.919 07:17:41 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:16.919 * Looking for test storage... 00:20:16.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:16.919 07:17:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.919 07:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.919 07:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:16.919 07:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:16.919 07:17:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.919 07:17:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.920 07:17:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:20:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.178 07:17:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.178 --rc genhtml_branch_coverage=1 00:20:17.178 --rc genhtml_function_coverage=1 00:20:17.178 --rc genhtml_legend=1 00:20:17.178 --rc geninfo_all_blocks=1 00:20:17.178 --rc geninfo_unexecuted_blocks=1 00:20:17.178 00:20:17.178 ' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.178 --rc genhtml_branch_coverage=1 00:20:17.178 --rc genhtml_function_coverage=1 00:20:17.178 --rc genhtml_legend=1 00:20:17.178 --rc geninfo_all_blocks=1 00:20:17.178 --rc geninfo_unexecuted_blocks=1 00:20:17.178 00:20:17.178 ' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.178 --rc genhtml_branch_coverage=1 00:20:17.178 --rc genhtml_function_coverage=1 00:20:17.178 --rc genhtml_legend=1 00:20:17.178 --rc geninfo_all_blocks=1 00:20:17.178 --rc geninfo_unexecuted_blocks=1 00:20:17.178 00:20:17.178 ' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.178 --rc genhtml_branch_coverage=1 00:20:17.178 --rc genhtml_function_coverage=1 00:20:17.178 --rc genhtml_legend=1 00:20:17.178 --rc geninfo_all_blocks=1 00:20:17.178 --rc 
geninfo_unexecuted_blocks=1 00:20:17.178 00:20:17.178 ' 00:20:17.178 07:17:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:20:17.178 07:17:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59958 00:20:17.178 07:17:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59958 00:20:17.178 07:17:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59958 ']' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.178 07:17:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:17.178 [2024-11-20 07:17:41.328110] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:17.178 [2024-11-20 07:17:41.328526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:20:17.438 [2024-11-20 07:17:41.505491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.438 [2024-11-20 07:17:41.662529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.405 07:17:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.405 07:17:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:20:18.405 07:17:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:20:18.664 { 00:20:18.664 "version": "SPDK v25.01-pre git sha1 400f484f7", 00:20:18.664 "fields": { 00:20:18.664 "major": 25, 00:20:18.664 "minor": 1, 00:20:18.664 "patch": 0, 00:20:18.664 "suffix": "-pre", 00:20:18.664 "commit": "400f484f7" 00:20:18.664 } 00:20:18.664 } 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:20:18.664 07:17:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.664 07:17:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:20:18.664 07:17:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:20:18.664 07:17:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.923 07:17:42 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:20:18.923 07:17:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:20:18.923 07:17:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.923 07:17:42 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:19.181 request: 00:20:19.181 { 00:20:19.181 "method": "env_dpdk_get_mem_stats", 00:20:19.181 "req_id": 1 00:20:19.181 } 00:20:19.181 Got JSON-RPC error response 00:20:19.181 response: 00:20:19.181 { 00:20:19.181 "code": -32601, 00:20:19.181 "message": "Method not found" 00:20:19.181 } 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
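The `spdk_get_version` RPC output above pairs the human-readable version string with a structured `fields` object. A hedged sketch of how that string decomposes into those fields (the parser and its regex are illustrative assumptions, not SPDK code):

```python
import re

def parse_spdk_version(version: str) -> dict:
    """Split a version string like the one reported by spdk_get_version above
    into the members shown in its "fields" object."""
    m = re.match(
        r"SPDK v(?P<major>\d+)\.(?P<minor>\d+)(?:\.(?P<patch>\d+))?"
        r"(?P<suffix>-\w+)?(?: git sha1 (?P<commit>[0-9a-f]+))?$",
        version,
    )
    if m is None:
        raise ValueError(f"unrecognized version string: {version!r}")
    return {
        "major": int(m["major"]),
        "minor": int(m["minor"]),        # "01" parses to 1, matching the RPC output
        "patch": int(m["patch"] or 0),   # absent patch component reads as 0
        "suffix": m["suffix"] or "",
        "commit": m["commit"] or "",
    }

print(parse_spdk_version("SPDK v25.01-pre git sha1 400f484f7"))
```

Run against the logged string, this yields major 25, minor 1, patch 0, suffix `-pre`, commit `400f484f7`, consistent with the JSON the test received.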
00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:19.181 07:17:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59958 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59958 ']' 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59958 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59958 00:20:19.181 killing process with pid 59958 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59958' 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@973 -- # kill 59958 00:20:19.181 07:17:43 app_cmdline -- common/autotest_common.sh@978 -- # wait 59958 00:20:21.713 00:20:21.713 real 0m4.666s 00:20:21.713 user 0m4.967s 00:20:21.713 sys 0m0.790s 00:20:21.713 ************************************ 00:20:21.713 END TEST app_cmdline 00:20:21.713 ************************************ 00:20:21.713 07:17:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.713 07:17:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:21.713 07:17:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:21.713 07:17:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:21.713 07:17:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.713 07:17:45 -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.713 ************************************ 00:20:21.713 START TEST version 00:20:21.713 ************************************ 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:21.713 * Looking for test storage... 00:20:21.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.713 07:17:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.713 07:17:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.713 07:17:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.713 07:17:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.713 07:17:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.713 07:17:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.713 07:17:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.713 07:17:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.713 07:17:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.713 07:17:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.713 07:17:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.713 07:17:45 version -- scripts/common.sh@344 -- # case "$op" in 00:20:21.713 07:17:45 version -- scripts/common.sh@345 -- # : 1 00:20:21.713 07:17:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.713 07:17:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.713 07:17:45 version -- scripts/common.sh@365 -- # decimal 1 00:20:21.713 07:17:45 version -- scripts/common.sh@353 -- # local d=1 00:20:21.713 07:17:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.713 07:17:45 version -- scripts/common.sh@355 -- # echo 1 00:20:21.713 07:17:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.713 07:17:45 version -- scripts/common.sh@366 -- # decimal 2 00:20:21.713 07:17:45 version -- scripts/common.sh@353 -- # local d=2 00:20:21.713 07:17:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.713 07:17:45 version -- scripts/common.sh@355 -- # echo 2 00:20:21.713 07:17:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.713 07:17:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.713 07:17:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.713 07:17:45 version -- scripts/common.sh@368 -- # return 0 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.713 --rc genhtml_branch_coverage=1 00:20:21.713 --rc genhtml_function_coverage=1 00:20:21.713 --rc genhtml_legend=1 00:20:21.713 --rc geninfo_all_blocks=1 00:20:21.713 --rc geninfo_unexecuted_blocks=1 00:20:21.713 00:20:21.713 ' 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.713 --rc genhtml_branch_coverage=1 00:20:21.713 --rc genhtml_function_coverage=1 00:20:21.713 --rc genhtml_legend=1 00:20:21.713 --rc geninfo_all_blocks=1 00:20:21.713 --rc geninfo_unexecuted_blocks=1 00:20:21.713 00:20:21.713 ' 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.713 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.713 --rc genhtml_branch_coverage=1 00:20:21.713 --rc genhtml_function_coverage=1 00:20:21.713 --rc genhtml_legend=1 00:20:21.713 --rc geninfo_all_blocks=1 00:20:21.713 --rc geninfo_unexecuted_blocks=1 00:20:21.713 00:20:21.713 ' 00:20:21.713 07:17:45 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.713 --rc genhtml_branch_coverage=1 00:20:21.713 --rc genhtml_function_coverage=1 00:20:21.713 --rc genhtml_legend=1 00:20:21.713 --rc geninfo_all_blocks=1 00:20:21.713 --rc geninfo_unexecuted_blocks=1 00:20:21.713 00:20:21.713 ' 00:20:21.713 07:17:45 version -- app/version.sh@17 -- # get_header_version major 00:20:21.713 07:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # cut -f2 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:20:21.713 07:17:45 version -- app/version.sh@17 -- # major=25 00:20:21.713 07:17:45 version -- app/version.sh@18 -- # get_header_version minor 00:20:21.713 07:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # cut -f2 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:20:21.713 07:17:45 version -- app/version.sh@18 -- # minor=1 00:20:21.713 07:17:45 version -- app/version.sh@19 -- # get_header_version patch 00:20:21.713 07:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # cut -f2 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:20:21.713 07:17:45 version -- app/version.sh@19 -- # patch=0 00:20:21.713 
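The `get_header_version` steps above grep `include/spdk/version.h` for `#define SPDK_VERSION_*` lines, then `cut`/`tr` the value out. A minimal sketch of that extraction; the sample header contents are illustrative, not copied from the repository:

```python
import re

# Illustrative stand-in for include/spdk/version.h.
SAMPLE_HEADER = """
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
"""

def get_header_version(header_text: str, field: str) -> str:
    # Mirrors: grep -E '^#define SPDK_VERSION_<FIELD>[[:space:]]+' | cut -f2 | tr -d '"'
    pattern = rf"^#define SPDK_VERSION_{field}\s+(.*)$"
    m = re.search(pattern, header_text, flags=re.MULTILINE)
    if m is None:
        raise KeyError(field)
    return m.group(1).strip().strip('"')

major = get_header_version(SAMPLE_HEADER, "MAJOR")    # '25'
minor = get_header_version(SAMPLE_HEADER, "MINOR")    # '1'
suffix = get_header_version(SAMPLE_HEADER, "SUFFIX")  # '-pre'
print(f"{major}.{minor}{suffix}")  # 25.1-pre
```

With patch 0 the script composes `version=25.1`, and the non-empty suffix is what turns it into the `25.1rc0` the test then compares against `python3 -c 'import spdk; print(spdk.__version__)'`.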
07:17:45 version -- app/version.sh@20 -- # get_header_version suffix 00:20:21.713 07:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:20:21.713 07:17:45 version -- app/version.sh@14 -- # cut -f2 00:20:21.713 07:17:45 version -- app/version.sh@20 -- # suffix=-pre 00:20:21.713 07:17:45 version -- app/version.sh@22 -- # version=25.1 00:20:21.713 07:17:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:20:21.713 07:17:45 version -- app/version.sh@28 -- # version=25.1rc0 00:20:21.713 07:17:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:21.713 07:17:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:20:21.972 07:17:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:20:21.972 07:17:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:20:21.972 00:20:21.972 real 0m0.251s 00:20:21.972 user 0m0.150s 00:20:21.972 sys 0m0.130s 00:20:21.972 ************************************ 00:20:21.972 END TEST version 00:20:21.972 ************************************ 00:20:21.972 07:17:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.972 07:17:46 version -- common/autotest_common.sh@10 -- # set +x 00:20:21.972 07:17:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:20:21.972 07:17:46 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:20:21.972 07:17:46 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:20:21.972 07:17:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:21.972 07:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.972 07:17:46 -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.972 ************************************ 00:20:21.972 START TEST bdev_raid 00:20:21.972 ************************************ 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:20:21.972 * Looking for test storage... 00:20:21.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@345 -- # : 1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.972 07:17:46 bdev_raid -- scripts/common.sh@368 -- # return 0 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.972 --rc genhtml_branch_coverage=1 00:20:21.972 --rc genhtml_function_coverage=1 00:20:21.972 --rc genhtml_legend=1 00:20:21.972 --rc geninfo_all_blocks=1 00:20:21.972 --rc geninfo_unexecuted_blocks=1 00:20:21.972 00:20:21.972 ' 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.972 --rc genhtml_branch_coverage=1 00:20:21.972 --rc genhtml_function_coverage=1 00:20:21.972 --rc genhtml_legend=1 00:20:21.972 --rc geninfo_all_blocks=1 00:20:21.972 --rc geninfo_unexecuted_blocks=1 00:20:21.972 00:20:21.972 ' 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:20:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.972 --rc genhtml_branch_coverage=1 00:20:21.972 --rc genhtml_function_coverage=1 00:20:21.972 --rc genhtml_legend=1 00:20:21.972 --rc geninfo_all_blocks=1 00:20:21.972 --rc geninfo_unexecuted_blocks=1 00:20:21.972 00:20:21.972 ' 00:20:21.972 07:17:46 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.972 --rc genhtml_branch_coverage=1 00:20:21.972 --rc genhtml_function_coverage=1 00:20:21.972 --rc genhtml_legend=1 00:20:21.972 --rc geninfo_all_blocks=1 00:20:21.972 --rc geninfo_unexecuted_blocks=1 00:20:21.972 00:20:21.972 ' 00:20:21.972 07:17:46 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:22.231 07:17:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:20:22.231 07:17:46 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:20:22.231 07:17:46 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:20:22.231 07:17:46 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:20:22.231 07:17:46 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:20:22.231 07:17:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:20:22.231 07:17:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.231 07:17:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.231 07:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:22.231 ************************************ 00:20:22.231 START TEST raid1_resize_data_offset_test 00:20:22.231 ************************************ 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60151 00:20:22.231 Process raid pid: 60151 00:20:22.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60151' 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60151 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60151 ']' 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.231 07:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.231 [2024-11-20 07:17:46.388108] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:22.231 [2024-11-20 07:17:46.388543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.489 [2024-11-20 07:17:46.578139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.489 [2024-11-20 07:17:46.735751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.746 [2024-11-20 07:17:46.977215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.746 [2024-11-20 07:17:46.977524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.312 malloc0 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.312 malloc1 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.312 07:17:47 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.312 null0 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.312 [2024-11-20 07:17:47.589066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:20:23.312 [2024-11-20 07:17:47.591826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:23.312 [2024-11-20 07:17:47.591912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:20:23.312 [2024-11-20 07:17:47.592170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:23.312 [2024-11-20 07:17:47.592198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:20:23.312 [2024-11-20 07:17:47.592571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:23.312 [2024-11-20 07:17:47.592865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:23.312 [2024-11-20 07:17:47.592893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:23.312 [2024-11-20 07:17:47.593147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
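The raid bdev above reports `blockcnt 129024, blocklen 512`, while each malloc base bdev was created as 64 MiB with 512-byte blocks and the test later confirms a `data_offset` of 2048 blocks. A hedged check of that relationship (the helper is illustrative, not SPDK code; it assumes raid1 capacity is one base bdev minus the reserved offset):

```python
def raid1_usable_blocks(base_mib: int, blocklen: int, data_offset_blocks: int) -> int:
    """Usable block count of a raid1 bdev given one base bdev's size and the
    per-base-bdev data_offset reserved at its start (assumption based on the
    numbers in the log, not on the raid module's source)."""
    base_blocks = base_mib * 1024 * 1024 // blocklen  # 64 MiB / 512 = 131072
    return base_blocks - data_offset_blocks

print(raid1_usable_blocks(64, 512, 2048))  # 129024
```

131072 - 2048 = 129024 matches the logged `blockcnt`, which is consistent with the test's subsequent `(( 2048 == 2048 ))` check on `base_bdevs_list[2].data_offset`.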
00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.312 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:23.570 [2024-11-20 07:17:47.653340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.570 07:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:24.135 malloc2
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:24.135 [2024-11-20 07:17:48.293990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:24.135 [2024-11-20 07:17:48.313372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.135 [2024-11-20 07:17:48.316342] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60151
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60151 ']'
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60151
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60151
killing process with pid 60151
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60151'
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60151
00:20:24.135 07:17:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60151
00:20:24.135 [2024-11-20 07:17:48.406806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:24.135 [2024-11-20 07:17:48.409283] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:20:24.135 [2024-11-20 07:17:48.409393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:24.135 [2024-11-20 07:17:48.409428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:20:24.393 [2024-11-20 07:17:48.443685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:24.393 [2024-11-20 07:17:48.444250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:24.393 [2024-11-20 07:17:48.444282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:20:26.296 [2024-11-20 07:17:50.268317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:27.231 07:17:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:20:27.231
00:20:27.231 real 0m5.148s
00:20:27.231 user 0m4.903s
00:20:27.231 sys 0m0.851s
************************************
00:20:27.231 END TEST raid1_resize_data_offset_test
00:20:27.231 ************************************
00:20:27.231 07:17:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:27.231 07:17:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:20:27.231 07:17:51 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:20:27.231 07:17:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:27.231 07:17:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:27.231 07:17:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:27.231 ************************************
00:20:27.231 START TEST raid0_resize_superblock_test
00:20:27.231 ************************************
00:20:27.231 Process raid pid: 60240
00:20:27.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:27.231 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:20:27.231 07:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:20:27.231 07:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60240
00:20:27.231 07:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60240'
00:20:27.231 07:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60240
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60240 ']'
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:27.232 07:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:27.491 [2024-11-20 07:17:51.601000] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:20:27.491 [2024-11-20 07:17:51.601223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:27.750 [2024-11-20 07:17:51.792495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.750 [2024-11-20 07:17:51.945678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:28.008 [2024-11-20 07:17:52.181084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:28.008 [2024-11-20 07:17:52.181160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:28.630 07:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:28.630 07:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:20:28.630 07:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:20:28.630 07:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.630 07:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 malloc0
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 [2024-11-20 07:17:53.202661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 07:17:53.202914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:29.198 [2024-11-20 07:17:53.202987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 07:17:53.203019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 07:17:53.206341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 07:17:53.206555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 976a7db5-c6d0-40d1-aa82-2a61e32c5041
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 04076f18-4143-4d72-807d-07e31a2ebd1f
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 82b1371c-0449-4ee8-9a1f-b2f8234d5fdb
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 [2024-11-20 07:17:53.402758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 04076f18-4143-4d72-807d-07e31a2ebd1f is claimed
[2024-11-20 07:17:53.403083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 82b1371c-0449-4ee8-9a1f-b2f8234d5fdb is claimed
[2024-11-20 07:17:53.403303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 07:17:53.403334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-20 07:17:53.403740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 07:17:53.404040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 07:17:53.404079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 07:17:53.404287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:20:29.198 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-20 07:17:53.531121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.457 [2024-11-20 07:17:53.587380] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:20:29.457 [2024-11-20 07:17:53.587646] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '04076f18-4143-4d72-807d-07e31a2ebd1f' was resized: old size 131072, new size 204800
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.457 [2024-11-20 07:17:53.599016] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:20:29.457 [2024-11-20 07:17:53.599224] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '82b1371c-0449-4ee8-9a1f-b2f8234d5fdb' was resized: old size 131072, new size 204800
[2024-11-20 07:17:53.599433] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:20:29.457 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:20:29.458 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:20:29.458 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:20:29.458 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.458 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.458 [2024-11-20 07:17:53.719346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:29.458 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.717 [2024-11-20 07:17:53.766930] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:20:29.717 [2024-11-20 07:17:53.767256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:20:29.717 [2024-11-20 07:17:53.767295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:29.717 [2024-11-20 07:17:53.767330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:20:29.717 [2024-11-20 07:17:53.767568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:29.717 [2024-11-20 07:17:53.767687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:29.717 [2024-11-20 07:17:53.767724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.717 [2024-11-20 07:17:53.774780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 07:17:53.774859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 07:17:53.774897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-20 07:17:53.774919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 07:17:53.778268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 07:17:53.778323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.717 [2024-11-20 07:17:53.781199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 04076f18-4143-4d72-807d-07e31a2ebd1f
[2024-11-20 07:17:53.781496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 04076f18-4143-4d72-807d-07e31a2ebd1f is claimed
[2024-11-20 07:17:53.781711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 82b1371c-0449-4ee8-9a1f-b2f8234d5fdb
[2024-11-20 07:17:53.781785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 82b1371c-0449-4ee8-9a1f-b2f8234d5fdb is claimed
[2024-11-20 07:17:53.781989] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 82b1371c-0449-4ee8-9a1f-b2f8234d5fdb (2) smaller than existing raid bdev Raid (3)
[2024-11-20 07:17:53.782034] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 04076f18-4143-4d72-807d-07e31a2ebd1f: File exists
[2024-11-20 07:17:53.782120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-20 07:17:53.782161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-20 07:17:53.782562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 07:17:53.782845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 07:17:53.782872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 07:17:53.783312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:29.717 [2024-11-20 07:17:53.795324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60240
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60240 ']'
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60240
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60240
killing process with pid 60240
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60240'
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60240
[2024-11-20 07:17:53.881780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:29.717 07:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60240
00:20:29.717 [2024-11-20 07:17:53.881891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 07:17:53.881965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 07:17:53.881982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:20:31.094 [2024-11-20 07:17:55.345361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:32.469 ************************************
00:20:32.469 END TEST raid0_resize_superblock_test
00:20:32.469 ************************************
00:20:32.469 07:17:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:20:32.469
00:20:32.469 real 0m5.053s
00:20:32.469 user 0m5.182s
00:20:32.469 sys 0m0.834s
00:20:32.469 07:17:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:32.469 07:17:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:32.469 07:17:56 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:20:32.469 07:17:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:32.469 07:17:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:32.469 07:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:32.469 ************************************
00:20:32.469 START TEST raid1_resize_superblock_test
00:20:32.469 ************************************
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:20:32.469 Process raid pid: 60346
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60346
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60346'
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60346
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60346 ']'
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:32.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:32.469 07:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:32.469 [2024-11-20 07:17:56.701542] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:20:32.469 [2024-11-20 07:17:56.701760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:32.727 [2024-11-20 07:17:56.888682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:32.986 [2024-11-20 07:17:57.041954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:33.244 [2024-11-20 07:17:57.282886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:33.244 [2024-11-20 07:17:57.283213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:33.502 07:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:33.502 07:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:20:33.502 07:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:20:33.502 07:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.502 07:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.069 malloc0
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.069 [2024-11-20 07:17:58.258337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 07:17:58.258479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 07:17:58.258523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 07:17:58.258551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 07:17:58.261855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 07:17:58.261913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.069 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.328 8aa6435e-7110-4212-81c6-f250addf9d94
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.328 b19963db-db4c-4d0b-a303-fc967f70256e
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.328 e6bb2ef4-cd83-4181-b92b-5b58da582b24
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:34.328 [2024-11-20 07:17:58.464537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b19963db-db4c-4d0b-a303-fc967f70256e is claimed
[2024-11-20 07:17:58.464754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e6bb2ef4-cd83-4181-b92b-5b58da582b24 is claimed
[2024-11-20 07:17:58.465096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 07:17:58.465161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-20 07:17:58.465615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 07:17:58.466025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 07:17:58.466070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 07:17:58.466345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:20:34.328 [2024-11-20 
07:17:58.592929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.328 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.641000] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:34.587 [2024-11-20 07:17:58.641086] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b19963db-db4c-4d0b-a303-fc967f70256e' was resized: old size 131072, new size 204800 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.648638] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:34.587 [2024-11-20 07:17:58.648671] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e6bb2ef4-cd83-4181-b92b-5b58da582b24' was resized: old size 131072, new size 204800 00:20:34.587 
[2024-11-20 07:17:58.648716] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.768782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.816517] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:20:34.587 [2024-11-20 07:17:58.816667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:20:34.587 [2024-11-20 07:17:58.816717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:20:34.587 [2024-11-20 07:17:58.816952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.587 [2024-11-20 07:17:58.817317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.587 [2024-11-20 07:17:58.817441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.587 
[2024-11-20 07:17:58.817471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.824446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:20:34.587 [2024-11-20 07:17:58.824528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.587 [2024-11-20 07:17:58.824568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:34.587 [2024-11-20 07:17:58.824641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.587 [2024-11-20 07:17:58.827919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.587 [2024-11-20 07:17:58.828140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:20:34.587 pt0 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.587 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.587 [2024-11-20 07:17:58.830801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b19963db-db4c-4d0b-a303-fc967f70256e 00:20:34.587 [2024-11-20 07:17:58.830905] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b19963db-db4c-4d0b-a303-fc967f70256e is claimed 00:20:34.587 [2024-11-20 07:17:58.831067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e6bb2ef4-cd83-4181-b92b-5b58da582b24 00:20:34.587 [2024-11-20 07:17:58.831110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e6bb2ef4-cd83-4181-b92b-5b58da582b24 is claimed 00:20:34.587 [2024-11-20 07:17:58.831290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e6bb2ef4-cd83-4181-b92b-5b58da582b24 (2) smaller than existing raid bdev Raid (3) 00:20:34.587 [2024-11-20 07:17:58.831330] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b19963db-db4c-4d0b-a303-fc967f70256e: File exists 00:20:34.587 [2024-11-20 07:17:58.831401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:34.587 [2024-11-20 07:17:58.831438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:34.587 [2024-11-20 07:17:58.831822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:34.587 [2024-11-20 07:17:58.832058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:34.587 [2024-11-20 07:17:58.832076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:20:34.588 [2024-11-20 07:17:58.832290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.588 [2024-11-20 07:17:58.844796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.588 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60346 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60346 ']' 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60346 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60346 00:20:34.847 killing process with pid 60346 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60346' 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60346 00:20:34.847 07:17:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60346 00:20:34.847 [2024-11-20 07:17:58.924233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:34.847 [2024-11-20 07:17:58.924399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.847 [2024-11-20 07:17:58.924562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.847 [2024-11-20 07:17:58.924624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:20:36.222 [2024-11-20 07:18:00.367892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.596 07:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:20:37.596 00:20:37.596 real 0m4.933s 00:20:37.596 user 0m5.075s 00:20:37.596 sys 0m0.759s 00:20:37.596 07:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.596 07:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.596 ************************************ 00:20:37.596 END TEST raid1_resize_superblock_test 00:20:37.596 ************************************ 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:20:37.596 07:18:01 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:20:37.596 
07:18:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.596 07:18:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.596 07:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.596 ************************************ 00:20:37.596 START TEST raid_function_test_raid0 00:20:37.596 ************************************ 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:20:37.596 Process raid pid: 60449 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60449 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60449' 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60449 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60449 ']' 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.596 07:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:37.596 [2024-11-20 07:18:01.713258] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:37.596 [2024-11-20 07:18:01.713690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.855 [2024-11-20 07:18:01.900814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.855 [2024-11-20 07:18:02.053897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.113 [2024-11-20 07:18:02.294071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.113 [2024-11-20 07:18:02.294169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:38.680 Base_1 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.680 
07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:38.680 Base_2 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:38.680 [2024-11-20 07:18:02.813187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:38.680 [2024-11-20 07:18:02.815846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:38.680 [2024-11-20 07:18:02.815954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:38.680 [2024-11-20 07:18:02.815979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:38.680 [2024-11-20 07:18:02.816314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:38.680 [2024-11-20 07:18:02.816528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:38.680 [2024-11-20 07:18:02.816546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:20:38.680 [2024-11-20 07:18:02.816763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:20:38.680 07:18:02 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.680 07:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:20:38.938 [2024-11-20 07:18:03.145373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:38.938 /dev/nbd0 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.938 1+0 records in 00:20:38.938 1+0 records out 00:20:38.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456716 s, 9.0 MB/s 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:20:38.938 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.939 07:18:03 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.939 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:20:38.939 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.939 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:39.504 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:39.504 { 00:20:39.504 "nbd_device": "/dev/nbd0", 00:20:39.504 "bdev_name": "raid" 00:20:39.504 } 00:20:39.504 ]' 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:39.505 { 00:20:39.505 "nbd_device": "/dev/nbd0", 00:20:39.505 "bdev_name": "raid" 00:20:39.505 } 00:20:39.505 ]' 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:20:39.505 4096+0 records in 00:20:39.505 4096+0 records out 00:20:39.505 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0277738 s, 75.5 MB/s 00:20:39.505 07:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:20:39.763 4096+0 records in 00:20:39.763 4096+0 records out 00:20:39.763 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.406317 s, 5.2 MB/s 00:20:39.763 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:20:40.020 128+0 records in 00:20:40.020 128+0 records out 00:20:40.020 65536 bytes (66 kB, 64 KiB) copied, 0.000692405 s, 94.6 MB/s 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:20:40.020 2035+0 records in 00:20:40.020 2035+0 records out 00:20:40.020 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0143624 s, 72.5 MB/s 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:20:40.020 456+0 records in 00:20:40.020 456+0 records out 00:20:40.020 233472 bytes (233 kB, 228 KiB) copied, 0.0019977 s, 117 MB/s 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.020 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:40.278 [2024-11-20 07:18:04.474256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.278 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:20:40.536 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60449 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60449 ']' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60449 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60449 00:20:40.794 killing process with pid 60449 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60449' 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60449 00:20:40.794 [2024-11-20 07:18:04.903867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:40.794 07:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60449 00:20:40.794 [2024-11-20 07:18:04.904085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.794 [2024-11-20 07:18:04.904168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:40.794 [2024-11-20 07:18:04.904204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:20:41.052 [2024-11-20 07:18:05.110649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.454 ************************************ 00:20:42.454 END TEST raid_function_test_raid0 00:20:42.454 ************************************ 00:20:42.454 07:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:20:42.454 00:20:42.454 real 0m4.767s 00:20:42.454 user 0m5.747s 00:20:42.454 sys 0m1.135s 00:20:42.454 07:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.454 07:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:42.454 07:18:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:20:42.454 07:18:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.454 07:18:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.454 07:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.454 
************************************ 00:20:42.454 START TEST raid_function_test_concat 00:20:42.454 ************************************ 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60583 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:42.454 Process raid pid: 60583 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60583' 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60583 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60583 ']' 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.454 07:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:42.454 [2024-11-20 07:18:06.536153] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:42.454 [2024-11-20 07:18:06.536356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.454 [2024-11-20 07:18:06.729886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.713 [2024-11-20 07:18:06.896604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.971 [2024-11-20 07:18:07.156216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.971 [2024-11-20 07:18:07.156307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:43.539 Base_1 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:43.539 Base_2 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:43.539 [2024-11-20 07:18:07.689080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:43.539 [2024-11-20 07:18:07.691577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:43.539 [2024-11-20 07:18:07.691717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:43.539 [2024-11-20 07:18:07.691740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:43.539 [2024-11-20 07:18:07.692074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:43.539 [2024-11-20 07:18:07.692279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:43.539 [2024-11-20 07:18:07.692296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:20:43.539 [2024-11-20 07:18:07.692483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:43.539 07:18:07 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.539 07:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:20:43.796 [2024-11-20 07:18:08.069257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:44.055 /dev/nbd0 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.055 1+0 records in 00:20:44.055 1+0 records out 00:20:44.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400192 s, 10.2 MB/s 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.055 
07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.055 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:44.313 { 00:20:44.313 "nbd_device": "/dev/nbd0", 00:20:44.313 "bdev_name": "raid" 00:20:44.313 } 00:20:44.313 ]' 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:44.313 { 00:20:44.313 "nbd_device": "/dev/nbd0", 00:20:44.313 "bdev_name": "raid" 00:20:44.313 } 00:20:44.313 ]' 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:20:44.313 
07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:20:44.313 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:20:44.570 4096+0 records in 00:20:44.570 4096+0 records out 00:20:44.570 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0305984 s, 68.5 MB/s 00:20:44.570 07:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:20:44.829 4096+0 records in 00:20:44.829 4096+0 
records out 00:20:44.829 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.390184 s, 5.4 MB/s 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:20:44.829 128+0 records in 00:20:44.829 128+0 records out 00:20:44.829 65536 bytes (66 kB, 64 KiB) copied, 0.0010137 s, 64.7 MB/s 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:20:44.829 2035+0 records in 00:20:44.829 2035+0 records out 00:20:44.829 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00914308 s, 114 MB/s 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:20:44.829 456+0 records in 00:20:44.829 456+0 records out 00:20:44.829 233472 bytes (233 kB, 228 KiB) copied, 0.00363621 s, 64.2 MB/s 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.829 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:45.394 [2024-11-20 07:18:09.416222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:20:45.394 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.394 07:18:09 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60583 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60583 ']' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60583 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60583 00:20:45.653 killing process with pid 60583 00:20:45.653 07:18:09 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60583' 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60583 00:20:45.653 [2024-11-20 07:18:09.851962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:45.653 07:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60583 00:20:45.653 [2024-11-20 07:18:09.852140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.653 [2024-11-20 07:18:09.852208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.653 [2024-11-20 07:18:09.852227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:20:45.912 [2024-11-20 07:18:10.032093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.847 ************************************ 00:20:46.847 END TEST raid_function_test_concat 00:20:46.847 ************************************ 00:20:46.847 07:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:20:46.847 00:20:46.847 real 0m4.693s 00:20:46.847 user 0m5.842s 00:20:46.847 sys 0m1.125s 00:20:46.847 07:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.847 07:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:47.105 07:18:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:20:47.105 07:18:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.105 07:18:11 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.105 07:18:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.105 ************************************ 00:20:47.105 START TEST raid0_resize_test 00:20:47.105 ************************************ 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:20:47.105 Process raid pid: 60717 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60717 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60717' 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60717 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60717 ']' 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.105 07:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.105 [2024-11-20 07:18:11.283690] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:47.105 [2024-11-20 07:18:11.283891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.364 [2024-11-20 07:18:11.478029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.364 [2024-11-20 07:18:11.649850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.622 [2024-11-20 07:18:11.878735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.622 [2024-11-20 07:18:11.878811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 Base_1 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 
07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 Base_2 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 [2024-11-20 07:18:12.391969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:48.189 [2024-11-20 07:18:12.394551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:48.189 [2024-11-20 07:18:12.394657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:48.189 [2024-11-20 07:18:12.394676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:48.189 [2024-11-20 07:18:12.394996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:48.189 [2024-11-20 07:18:12.395161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:48.189 [2024-11-20 07:18:12.395176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:48.189 [2024-11-20 07:18:12.395351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 
07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 [2024-11-20 07:18:12.399967] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:48.189 [2024-11-20 07:18:12.400001] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:20:48.189 true 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:20:48.189 [2024-11-20 07:18:12.412244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.189 [2024-11-20 07:18:12.464044] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:48.189 [2024-11-20 07:18:12.464077] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:20:48.189 [2024-11-20 07:18:12.464120] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:20:48.189 true 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.189 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.488 [2024-11-20 07:18:12.476228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60717 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60717 ']' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60717 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60717 00:20:48.488 killing process with pid 60717 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60717' 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60717 00:20:48.488 [2024-11-20 07:18:12.559188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.488 07:18:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60717 00:20:48.488 [2024-11-20 07:18:12.559301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.488 [2024-11-20 07:18:12.559377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.488 [2024-11-20 07:18:12.559392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:48.489 [2024-11-20 07:18:12.576015] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.885 ************************************ 00:20:49.885 END TEST raid0_resize_test 00:20:49.885 ************************************ 00:20:49.885 07:18:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:20:49.885 00:20:49.885 real 0m2.591s 00:20:49.885 user 0m2.928s 
00:20:49.885 sys 0m0.410s 00:20:49.885 07:18:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.885 07:18:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.885 07:18:13 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:20:49.885 07:18:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.885 07:18:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.885 07:18:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.885 ************************************ 00:20:49.885 START TEST raid1_resize_test 00:20:49.885 ************************************ 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60784 00:20:49.885 Process raid pid: 60784 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # 
echo 'Process raid pid: 60784' 00:20:49.885 07:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60784 00:20:49.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60784 ']' 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.886 07:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.886 [2024-11-20 07:18:13.925148] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:49.886 [2024-11-20 07:18:13.925366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.886 [2024-11-20 07:18:14.122713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.144 [2024-11-20 07:18:14.318930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.402 [2024-11-20 07:18:14.580574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.402 [2024-11-20 07:18:14.580681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.969 07:18:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.969 07:18:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:20:50.969 07:18:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:20:50.969 07:18:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.969 07:18:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.969 Base_1 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.969 Base_2 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.969 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.969 [2024-11-20 07:18:15.018095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:50.969 [2024-11-20 07:18:15.020764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:50.969 [2024-11-20 07:18:15.020864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:50.969 [2024-11-20 07:18:15.020888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:50.969 [2024-11-20 07:18:15.021262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:50.969 [2024-11-20 07:18:15.021468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:50.969 [2024-11-20 07:18:15.021489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:50.970 [2024-11-20 07:18:15.021739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.970 [2024-11-20 07:18:15.026043] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:50.970 [2024-11-20 07:18:15.026083] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:20:50.970 true 00:20:50.970 
07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:20:50.970 [2024-11-20 07:18:15.038330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.970 [2024-11-20 07:18:15.098107] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:50.970 [2024-11-20 07:18:15.098158] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:20:50.970 [2024-11-20 07:18:15.098215] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:20:50.970 true 00:20:50.970 07:18:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:20:50.970 [2024-11-20 07:18:15.110288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60784 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60784 ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60784 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60784 00:20:50.970 killing process with pid 60784 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.970 07:18:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60784' 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60784 00:20:50.970 07:18:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60784 00:20:50.970 [2024-11-20 07:18:15.186768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.970 [2024-11-20 07:18:15.186921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.970 [2024-11-20 07:18:15.187723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.970 [2024-11-20 07:18:15.187759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:50.970 [2024-11-20 07:18:15.203209] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.345 07:18:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:20:52.345 00:20:52.345 real 0m2.446s 00:20:52.345 user 0m2.754s 00:20:52.345 sys 0m0.415s 00:20:52.345 ************************************ 00:20:52.345 END TEST raid1_resize_test 00:20:52.345 ************************************ 00:20:52.345 07:18:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.345 07:18:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.345 07:18:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:20:52.345 07:18:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:52.345 07:18:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:20:52.345 07:18:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:52.345 07:18:16 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.345 07:18:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:52.345 ************************************ 00:20:52.345 START TEST raid_state_function_test 00:20:52.345 ************************************ 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:52.345 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60841 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:52.346 Process raid pid: 60841 00:20:52.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60841' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60841 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60841 ']' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.346 07:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.346 [2024-11-20 07:18:16.440109] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:52.346 [2024-11-20 07:18:16.440677] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.604 [2024-11-20 07:18:16.634764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.604 [2024-11-20 07:18:16.765980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.862 [2024-11-20 07:18:16.970402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.863 [2024-11-20 07:18:16.970482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.121 [2024-11-20 07:18:17.373238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.121 [2024-11-20 07:18:17.373325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.121 [2024-11-20 07:18:17.373342] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.121 [2024-11-20 07:18:17.373358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.121 07:18:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.121 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.380 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.380 "name": "Existed_Raid", 00:20:53.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.380 "strip_size_kb": 64, 00:20:53.380 "state": "configuring", 00:20:53.380 
"raid_level": "raid0", 00:20:53.380 "superblock": false, 00:20:53.380 "num_base_bdevs": 2, 00:20:53.380 "num_base_bdevs_discovered": 0, 00:20:53.380 "num_base_bdevs_operational": 2, 00:20:53.380 "base_bdevs_list": [ 00:20:53.380 { 00:20:53.380 "name": "BaseBdev1", 00:20:53.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.380 "is_configured": false, 00:20:53.380 "data_offset": 0, 00:20:53.380 "data_size": 0 00:20:53.380 }, 00:20:53.380 { 00:20:53.380 "name": "BaseBdev2", 00:20:53.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.380 "is_configured": false, 00:20:53.380 "data_offset": 0, 00:20:53.380 "data_size": 0 00:20:53.380 } 00:20:53.380 ] 00:20:53.380 }' 00:20:53.380 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.380 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.639 [2024-11-20 07:18:17.889419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:53.639 [2024-11-20 07:18:17.889469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:53.639 [2024-11-20 07:18:17.897389] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.639 [2024-11-20 07:18:17.897459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.639 [2024-11-20 07:18:17.897474] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.639 [2024-11-20 07:18:17.897491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.639 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 [2024-11-20 07:18:17.942987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:53.900 BaseBdev1 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 [ 00:20:53.900 { 00:20:53.900 "name": "BaseBdev1", 00:20:53.900 "aliases": [ 00:20:53.900 "fb9e3a15-5760-47e6-96d7-a9800f417e50" 00:20:53.900 ], 00:20:53.900 "product_name": "Malloc disk", 00:20:53.900 "block_size": 512, 00:20:53.900 "num_blocks": 65536, 00:20:53.900 "uuid": "fb9e3a15-5760-47e6-96d7-a9800f417e50", 00:20:53.900 "assigned_rate_limits": { 00:20:53.900 "rw_ios_per_sec": 0, 00:20:53.900 "rw_mbytes_per_sec": 0, 00:20:53.900 "r_mbytes_per_sec": 0, 00:20:53.900 "w_mbytes_per_sec": 0 00:20:53.900 }, 00:20:53.900 "claimed": true, 00:20:53.900 "claim_type": "exclusive_write", 00:20:53.900 "zoned": false, 00:20:53.900 "supported_io_types": { 00:20:53.900 "read": true, 00:20:53.900 "write": true, 00:20:53.900 "unmap": true, 00:20:53.900 "flush": true, 00:20:53.900 "reset": true, 00:20:53.900 "nvme_admin": false, 00:20:53.900 "nvme_io": false, 00:20:53.900 "nvme_io_md": false, 00:20:53.900 "write_zeroes": true, 00:20:53.900 "zcopy": true, 00:20:53.900 "get_zone_info": false, 00:20:53.900 "zone_management": false, 00:20:53.900 "zone_append": false, 00:20:53.900 "compare": false, 00:20:53.900 "compare_and_write": false, 00:20:53.900 "abort": true, 00:20:53.900 "seek_hole": false, 00:20:53.900 "seek_data": false, 00:20:53.900 "copy": true, 00:20:53.900 "nvme_iov_md": 
false 00:20:53.900 }, 00:20:53.900 "memory_domains": [ 00:20:53.900 { 00:20:53.900 "dma_device_id": "system", 00:20:53.900 "dma_device_type": 1 00:20:53.900 }, 00:20:53.900 { 00:20:53.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.900 "dma_device_type": 2 00:20:53.900 } 00:20:53.900 ], 00:20:53.900 "driver_specific": {} 00:20:53.900 } 00:20:53.900 ] 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.900 
07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.900 07:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.900 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.900 "name": "Existed_Raid", 00:20:53.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.900 "strip_size_kb": 64, 00:20:53.900 "state": "configuring", 00:20:53.900 "raid_level": "raid0", 00:20:53.900 "superblock": false, 00:20:53.900 "num_base_bdevs": 2, 00:20:53.900 "num_base_bdevs_discovered": 1, 00:20:53.900 "num_base_bdevs_operational": 2, 00:20:53.900 "base_bdevs_list": [ 00:20:53.900 { 00:20:53.900 "name": "BaseBdev1", 00:20:53.900 "uuid": "fb9e3a15-5760-47e6-96d7-a9800f417e50", 00:20:53.900 "is_configured": true, 00:20:53.900 "data_offset": 0, 00:20:53.900 "data_size": 65536 00:20:53.900 }, 00:20:53.900 { 00:20:53.900 "name": "BaseBdev2", 00:20:53.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.900 "is_configured": false, 00:20:53.900 "data_offset": 0, 00:20:53.900 "data_size": 0 00:20:53.900 } 00:20:53.900 ] 00:20:53.900 }' 00:20:53.900 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.900 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 [2024-11-20 07:18:18.539366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.468 [2024-11-20 07:18:18.539705] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 [2024-11-20 07:18:18.547406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.468 [2024-11-20 07:18:18.550369] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.468 [2024-11-20 07:18:18.550435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.468 "name": "Existed_Raid", 00:20:54.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.468 "strip_size_kb": 64, 00:20:54.468 "state": "configuring", 00:20:54.468 "raid_level": "raid0", 00:20:54.468 "superblock": false, 00:20:54.468 "num_base_bdevs": 2, 00:20:54.468 "num_base_bdevs_discovered": 1, 00:20:54.468 "num_base_bdevs_operational": 2, 00:20:54.468 "base_bdevs_list": [ 00:20:54.468 { 00:20:54.468 "name": "BaseBdev1", 00:20:54.468 "uuid": "fb9e3a15-5760-47e6-96d7-a9800f417e50", 00:20:54.468 "is_configured": true, 00:20:54.468 "data_offset": 0, 00:20:54.468 "data_size": 65536 00:20:54.468 }, 00:20:54.468 { 00:20:54.468 "name": "BaseBdev2", 00:20:54.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.468 "is_configured": false, 00:20:54.468 "data_offset": 0, 00:20:54.468 "data_size": 0 00:20:54.468 } 00:20:54.468 
] 00:20:54.468 }' 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.468 07:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.033 [2024-11-20 07:18:19.154286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.033 [2024-11-20 07:18:19.154355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:55.033 [2024-11-20 07:18:19.154373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:55.033 [2024-11-20 07:18:19.154882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:55.033 [2024-11-20 07:18:19.155183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:55.033 [2024-11-20 07:18:19.155228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:55.033 BaseBdev2 00:20:55.033 [2024-11-20 07:18:19.155673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:55.033 07:18:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.033 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.033 [ 00:20:55.033 { 00:20:55.033 "name": "BaseBdev2", 00:20:55.033 "aliases": [ 00:20:55.033 "dab4c5f1-7e26-405e-bc9c-c6574cd48c67" 00:20:55.033 ], 00:20:55.033 "product_name": "Malloc disk", 00:20:55.033 "block_size": 512, 00:20:55.033 "num_blocks": 65536, 00:20:55.033 "uuid": "dab4c5f1-7e26-405e-bc9c-c6574cd48c67", 00:20:55.033 "assigned_rate_limits": { 00:20:55.033 "rw_ios_per_sec": 0, 00:20:55.033 "rw_mbytes_per_sec": 0, 00:20:55.033 "r_mbytes_per_sec": 0, 00:20:55.033 "w_mbytes_per_sec": 0 00:20:55.033 }, 00:20:55.033 "claimed": true, 00:20:55.033 "claim_type": "exclusive_write", 00:20:55.033 "zoned": false, 00:20:55.033 "supported_io_types": { 00:20:55.033 "read": true, 00:20:55.033 "write": true, 00:20:55.033 "unmap": true, 00:20:55.033 "flush": true, 00:20:55.033 "reset": true, 00:20:55.034 "nvme_admin": false, 00:20:55.034 "nvme_io": false, 00:20:55.034 "nvme_io_md": 
false, 00:20:55.034 "write_zeroes": true, 00:20:55.034 "zcopy": true, 00:20:55.034 "get_zone_info": false, 00:20:55.034 "zone_management": false, 00:20:55.034 "zone_append": false, 00:20:55.034 "compare": false, 00:20:55.034 "compare_and_write": false, 00:20:55.034 "abort": true, 00:20:55.034 "seek_hole": false, 00:20:55.034 "seek_data": false, 00:20:55.034 "copy": true, 00:20:55.034 "nvme_iov_md": false 00:20:55.034 }, 00:20:55.034 "memory_domains": [ 00:20:55.034 { 00:20:55.034 "dma_device_id": "system", 00:20:55.034 "dma_device_type": 1 00:20:55.034 }, 00:20:55.034 { 00:20:55.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.034 "dma_device_type": 2 00:20:55.034 } 00:20:55.034 ], 00:20:55.034 "driver_specific": {} 00:20:55.034 } 00:20:55.034 ] 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.034 "name": "Existed_Raid", 00:20:55.034 "uuid": "744747e0-3683-4da8-86ee-e9dab77acac1", 00:20:55.034 "strip_size_kb": 64, 00:20:55.034 "state": "online", 00:20:55.034 "raid_level": "raid0", 00:20:55.034 "superblock": false, 00:20:55.034 "num_base_bdevs": 2, 00:20:55.034 "num_base_bdevs_discovered": 2, 00:20:55.034 "num_base_bdevs_operational": 2, 00:20:55.034 "base_bdevs_list": [ 00:20:55.034 { 00:20:55.034 "name": "BaseBdev1", 00:20:55.034 "uuid": "fb9e3a15-5760-47e6-96d7-a9800f417e50", 00:20:55.034 "is_configured": true, 00:20:55.034 "data_offset": 0, 00:20:55.034 "data_size": 65536 00:20:55.034 }, 00:20:55.034 { 00:20:55.034 "name": "BaseBdev2", 00:20:55.034 "uuid": "dab4c5f1-7e26-405e-bc9c-c6574cd48c67", 00:20:55.034 "is_configured": true, 00:20:55.034 "data_offset": 0, 00:20:55.034 "data_size": 65536 00:20:55.034 } 00:20:55.034 ] 00:20:55.034 }' 00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:20:55.034 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.600 [2024-11-20 07:18:19.662841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.600 "name": "Existed_Raid", 00:20:55.600 "aliases": [ 00:20:55.600 "744747e0-3683-4da8-86ee-e9dab77acac1" 00:20:55.600 ], 00:20:55.600 "product_name": "Raid Volume", 00:20:55.600 "block_size": 512, 00:20:55.600 "num_blocks": 131072, 00:20:55.600 "uuid": "744747e0-3683-4da8-86ee-e9dab77acac1", 00:20:55.600 "assigned_rate_limits": { 00:20:55.600 "rw_ios_per_sec": 0, 00:20:55.600 "rw_mbytes_per_sec": 0, 00:20:55.600 "r_mbytes_per_sec": 
0, 00:20:55.600 "w_mbytes_per_sec": 0 00:20:55.600 }, 00:20:55.600 "claimed": false, 00:20:55.600 "zoned": false, 00:20:55.600 "supported_io_types": { 00:20:55.600 "read": true, 00:20:55.600 "write": true, 00:20:55.600 "unmap": true, 00:20:55.600 "flush": true, 00:20:55.600 "reset": true, 00:20:55.600 "nvme_admin": false, 00:20:55.600 "nvme_io": false, 00:20:55.600 "nvme_io_md": false, 00:20:55.600 "write_zeroes": true, 00:20:55.600 "zcopy": false, 00:20:55.600 "get_zone_info": false, 00:20:55.600 "zone_management": false, 00:20:55.600 "zone_append": false, 00:20:55.600 "compare": false, 00:20:55.600 "compare_and_write": false, 00:20:55.600 "abort": false, 00:20:55.600 "seek_hole": false, 00:20:55.600 "seek_data": false, 00:20:55.600 "copy": false, 00:20:55.600 "nvme_iov_md": false 00:20:55.600 }, 00:20:55.600 "memory_domains": [ 00:20:55.600 { 00:20:55.600 "dma_device_id": "system", 00:20:55.600 "dma_device_type": 1 00:20:55.600 }, 00:20:55.600 { 00:20:55.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.600 "dma_device_type": 2 00:20:55.600 }, 00:20:55.600 { 00:20:55.600 "dma_device_id": "system", 00:20:55.600 "dma_device_type": 1 00:20:55.600 }, 00:20:55.600 { 00:20:55.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.600 "dma_device_type": 2 00:20:55.600 } 00:20:55.600 ], 00:20:55.600 "driver_specific": { 00:20:55.600 "raid": { 00:20:55.600 "uuid": "744747e0-3683-4da8-86ee-e9dab77acac1", 00:20:55.600 "strip_size_kb": 64, 00:20:55.600 "state": "online", 00:20:55.600 "raid_level": "raid0", 00:20:55.600 "superblock": false, 00:20:55.600 "num_base_bdevs": 2, 00:20:55.600 "num_base_bdevs_discovered": 2, 00:20:55.600 "num_base_bdevs_operational": 2, 00:20:55.600 "base_bdevs_list": [ 00:20:55.600 { 00:20:55.600 "name": "BaseBdev1", 00:20:55.600 "uuid": "fb9e3a15-5760-47e6-96d7-a9800f417e50", 00:20:55.600 "is_configured": true, 00:20:55.600 "data_offset": 0, 00:20:55.600 "data_size": 65536 00:20:55.600 }, 00:20:55.600 { 00:20:55.600 "name": "BaseBdev2", 
00:20:55.600 "uuid": "dab4c5f1-7e26-405e-bc9c-c6574cd48c67", 00:20:55.600 "is_configured": true, 00:20:55.600 "data_offset": 0, 00:20:55.600 "data_size": 65536 00:20:55.600 } 00:20:55.600 ] 00:20:55.600 } 00:20:55.600 } 00:20:55.600 }' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:55.600 BaseBdev2' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.600 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.860 [2024-11-20 07:18:19.910648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:55.860 [2024-11-20 07:18:19.910839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:55.860 [2024-11-20 07:18:19.910938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.860 07:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.860 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.860 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.860 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.860 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.860 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.861 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.861 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.861 "name": "Existed_Raid", 00:20:55.861 "uuid": "744747e0-3683-4da8-86ee-e9dab77acac1", 00:20:55.861 "strip_size_kb": 64, 00:20:55.861 
"state": "offline", 00:20:55.861 "raid_level": "raid0", 00:20:55.861 "superblock": false, 00:20:55.861 "num_base_bdevs": 2, 00:20:55.861 "num_base_bdevs_discovered": 1, 00:20:55.861 "num_base_bdevs_operational": 1, 00:20:55.861 "base_bdevs_list": [ 00:20:55.861 { 00:20:55.861 "name": null, 00:20:55.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.861 "is_configured": false, 00:20:55.861 "data_offset": 0, 00:20:55.861 "data_size": 65536 00:20:55.861 }, 00:20:55.861 { 00:20:55.861 "name": "BaseBdev2", 00:20:55.861 "uuid": "dab4c5f1-7e26-405e-bc9c-c6574cd48c67", 00:20:55.861 "is_configured": true, 00:20:55.861 "data_offset": 0, 00:20:55.861 "data_size": 65536 00:20:55.861 } 00:20:55.861 ] 00:20:55.861 }' 00:20:55.861 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.861 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.428 [2024-11-20 07:18:20.540155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.428 [2024-11-20 07:18:20.540226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60841 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60841 ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60841 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60841 00:20:56.428 killing process with pid 60841 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60841' 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60841 00:20:56.428 [2024-11-20 07:18:20.710131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.428 07:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60841 00:20:56.687 [2024-11-20 07:18:20.725365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:57.621 00:20:57.621 real 0m5.449s 00:20:57.621 user 0m8.178s 00:20:57.621 sys 0m0.781s 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.621 ************************************ 00:20:57.621 END TEST raid_state_function_test 00:20:57.621 ************************************ 00:20:57.621 07:18:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:20:57.621 07:18:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:20:57.621 07:18:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.621 07:18:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.621 ************************************ 00:20:57.621 START TEST raid_state_function_test_sb 00:20:57.621 ************************************ 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:57.621 Process raid pid: 61094 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61094 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61094' 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61094 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61094 ']' 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.621 07:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.880 [2024-11-20 07:18:21.944052] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:57.880 [2024-11-20 07:18:21.944260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.880 [2024-11-20 07:18:22.130947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.138 [2024-11-20 07:18:22.265563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.396 [2024-11-20 07:18:22.474949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.396 [2024-11-20 07:18:22.475303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.962 [2024-11-20 07:18:22.961889] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:20:58.962 [2024-11-20 07:18:22.961965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.962 [2024-11-20 07:18:22.961997] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.962 [2024-11-20 07:18:22.962027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.962 07:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.962 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.962 "name": "Existed_Raid", 00:20:58.962 "uuid": "1c5b33bd-26c4-4e0b-b0e6-80ab6ccadc73", 00:20:58.962 "strip_size_kb": 64, 00:20:58.962 "state": "configuring", 00:20:58.962 "raid_level": "raid0", 00:20:58.962 "superblock": true, 00:20:58.962 "num_base_bdevs": 2, 00:20:58.962 "num_base_bdevs_discovered": 0, 00:20:58.962 "num_base_bdevs_operational": 2, 00:20:58.962 "base_bdevs_list": [ 00:20:58.962 { 00:20:58.962 "name": "BaseBdev1", 00:20:58.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.962 "is_configured": false, 00:20:58.962 "data_offset": 0, 00:20:58.962 "data_size": 0 00:20:58.962 }, 00:20:58.962 { 00:20:58.962 "name": "BaseBdev2", 00:20:58.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.962 "is_configured": false, 00:20:58.962 "data_offset": 0, 00:20:58.962 "data_size": 0 00:20:58.962 } 00:20:58.962 ] 00:20:58.962 }' 00:20:58.962 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.962 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.530 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.530 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.530 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.530 [2024-11-20 07:18:23.529980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.531 
[2024-11-20 07:18:23.530034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 [2024-11-20 07:18:23.537924] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.531 [2024-11-20 07:18:23.538034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:59.531 [2024-11-20 07:18:23.538048] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.531 [2024-11-20 07:18:23.538066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 [2024-11-20 07:18:23.582983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.531 BaseBdev1 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 [ 00:20:59.531 { 00:20:59.531 "name": "BaseBdev1", 00:20:59.531 "aliases": [ 00:20:59.531 "34419beb-92fe-4d38-b43e-7cc3351082c4" 00:20:59.531 ], 00:20:59.531 "product_name": "Malloc disk", 00:20:59.531 "block_size": 512, 00:20:59.531 "num_blocks": 65536, 00:20:59.531 "uuid": "34419beb-92fe-4d38-b43e-7cc3351082c4", 00:20:59.531 "assigned_rate_limits": { 00:20:59.531 "rw_ios_per_sec": 0, 00:20:59.531 "rw_mbytes_per_sec": 0, 00:20:59.531 "r_mbytes_per_sec": 0, 00:20:59.531 "w_mbytes_per_sec": 0 00:20:59.531 }, 00:20:59.531 "claimed": true, 00:20:59.531 "claim_type": 
"exclusive_write", 00:20:59.531 "zoned": false, 00:20:59.531 "supported_io_types": { 00:20:59.531 "read": true, 00:20:59.531 "write": true, 00:20:59.531 "unmap": true, 00:20:59.531 "flush": true, 00:20:59.531 "reset": true, 00:20:59.531 "nvme_admin": false, 00:20:59.531 "nvme_io": false, 00:20:59.531 "nvme_io_md": false, 00:20:59.531 "write_zeroes": true, 00:20:59.531 "zcopy": true, 00:20:59.531 "get_zone_info": false, 00:20:59.531 "zone_management": false, 00:20:59.531 "zone_append": false, 00:20:59.531 "compare": false, 00:20:59.531 "compare_and_write": false, 00:20:59.531 "abort": true, 00:20:59.531 "seek_hole": false, 00:20:59.531 "seek_data": false, 00:20:59.531 "copy": true, 00:20:59.531 "nvme_iov_md": false 00:20:59.531 }, 00:20:59.531 "memory_domains": [ 00:20:59.531 { 00:20:59.531 "dma_device_id": "system", 00:20:59.531 "dma_device_type": 1 00:20:59.531 }, 00:20:59.531 { 00:20:59.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.531 "dma_device_type": 2 00:20:59.531 } 00:20:59.531 ], 00:20:59.531 "driver_specific": {} 00:20:59.531 } 00:20:59.531 ] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.531 "name": "Existed_Raid", 00:20:59.531 "uuid": "fbc18b11-d073-4902-9c96-678416ee9d4e", 00:20:59.531 "strip_size_kb": 64, 00:20:59.531 "state": "configuring", 00:20:59.531 "raid_level": "raid0", 00:20:59.531 "superblock": true, 00:20:59.531 "num_base_bdevs": 2, 00:20:59.531 "num_base_bdevs_discovered": 1, 00:20:59.531 "num_base_bdevs_operational": 2, 00:20:59.531 "base_bdevs_list": [ 00:20:59.531 { 00:20:59.531 "name": "BaseBdev1", 00:20:59.531 "uuid": "34419beb-92fe-4d38-b43e-7cc3351082c4", 00:20:59.531 "is_configured": true, 00:20:59.531 "data_offset": 2048, 00:20:59.531 "data_size": 63488 00:20:59.531 }, 00:20:59.531 { 00:20:59.531 "name": "BaseBdev2", 00:20:59.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.531 "is_configured": false, 00:20:59.531 "data_offset": 0, 00:20:59.531 
"data_size": 0 00:20:59.531 } 00:20:59.531 ] 00:20:59.531 }' 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.531 07:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.098 [2024-11-20 07:18:24.151372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.098 [2024-11-20 07:18:24.151680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.098 [2024-11-20 07:18:24.163465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.098 [2024-11-20 07:18:24.166310] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:00.098 [2024-11-20 07:18:24.166390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.098 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:21:00.099 "name": "Existed_Raid", 00:21:00.099 "uuid": "0bc353a6-f552-4d0d-82be-dbf71d7e1a89", 00:21:00.099 "strip_size_kb": 64, 00:21:00.099 "state": "configuring", 00:21:00.099 "raid_level": "raid0", 00:21:00.099 "superblock": true, 00:21:00.099 "num_base_bdevs": 2, 00:21:00.099 "num_base_bdevs_discovered": 1, 00:21:00.099 "num_base_bdevs_operational": 2, 00:21:00.099 "base_bdevs_list": [ 00:21:00.099 { 00:21:00.099 "name": "BaseBdev1", 00:21:00.099 "uuid": "34419beb-92fe-4d38-b43e-7cc3351082c4", 00:21:00.099 "is_configured": true, 00:21:00.099 "data_offset": 2048, 00:21:00.099 "data_size": 63488 00:21:00.099 }, 00:21:00.099 { 00:21:00.099 "name": "BaseBdev2", 00:21:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.099 "is_configured": false, 00:21:00.099 "data_offset": 0, 00:21:00.099 "data_size": 0 00:21:00.099 } 00:21:00.099 ] 00:21:00.099 }' 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.099 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.666 [2024-11-20 07:18:24.723042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.666 [2024-11-20 07:18:24.723615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:00.666 BaseBdev2 00:21:00.666 [2024-11-20 07:18:24.723758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:00.666 [2024-11-20 07:18:24.724109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:00.666 [2024-11-20 
07:18:24.724307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.666 [2024-11-20 07:18:24.724328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:00.666 [2024-11-20 07:18:24.724506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:00.666 [
00:21:00.666 {
00:21:00.666 "name": "BaseBdev2",
00:21:00.666 "aliases": [
00:21:00.666 "24ad3510-27ea-4622-a6b2-337c011ab92f"
00:21:00.666 ],
00:21:00.666 "product_name": "Malloc disk",
00:21:00.666 "block_size": 512,
00:21:00.666 "num_blocks": 65536,
00:21:00.666 "uuid": "24ad3510-27ea-4622-a6b2-337c011ab92f",
00:21:00.666 "assigned_rate_limits": {
00:21:00.666 "rw_ios_per_sec": 0,
00:21:00.666 "rw_mbytes_per_sec": 0,
00:21:00.666 "r_mbytes_per_sec": 0,
00:21:00.666 "w_mbytes_per_sec": 0
00:21:00.666 },
00:21:00.666 "claimed": true,
00:21:00.666 "claim_type": "exclusive_write",
00:21:00.666 "zoned": false,
00:21:00.666 "supported_io_types": {
00:21:00.666 "read": true,
00:21:00.666 "write": true,
00:21:00.666 "unmap": true,
00:21:00.666 "flush": true,
00:21:00.666 "reset": true,
00:21:00.666 "nvme_admin": false,
00:21:00.666 "nvme_io": false,
00:21:00.666 "nvme_io_md": false,
00:21:00.666 "write_zeroes": true,
00:21:00.666 "zcopy": true,
00:21:00.666 "get_zone_info": false,
00:21:00.666 "zone_management": false,
00:21:00.666 "zone_append": false,
00:21:00.666 "compare": false,
00:21:00.666 "compare_and_write": false,
00:21:00.666 "abort": true,
00:21:00.666 "seek_hole": false,
00:21:00.666 "seek_data": false,
00:21:00.666 "copy": true,
00:21:00.666 "nvme_iov_md": false
00:21:00.666 },
00:21:00.666 "memory_domains": [
00:21:00.666 {
00:21:00.666 "dma_device_id": "system",
00:21:00.666 "dma_device_type": 1
00:21:00.666 },
00:21:00.666 {
00:21:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:00.666 "dma_device_type": 2
00:21:00.666 }
00:21:00.666 ],
00:21:00.666 "driver_specific": {}
00:21:00.666 }
00:21:00.666 ]
00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.666 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:00.667 "name": "Existed_Raid",
00:21:00.667 "uuid": "0bc353a6-f552-4d0d-82be-dbf71d7e1a89",
00:21:00.667 "strip_size_kb": 64,
00:21:00.667 "state": "online",
00:21:00.667 "raid_level": "raid0",
00:21:00.667 "superblock": true,
00:21:00.667 "num_base_bdevs": 2,
00:21:00.667 "num_base_bdevs_discovered": 2,
00:21:00.667 "num_base_bdevs_operational": 2,
00:21:00.667 "base_bdevs_list": [
00:21:00.667 {
00:21:00.667 "name": "BaseBdev1",
00:21:00.667 "uuid": "34419beb-92fe-4d38-b43e-7cc3351082c4",
00:21:00.667 "is_configured": true,
00:21:00.667 "data_offset": 2048,
00:21:00.667 "data_size": 63488
00:21:00.667 },
00:21:00.667 {
00:21:00.667 "name": "BaseBdev2",
00:21:00.667 "uuid": "24ad3510-27ea-4622-a6b2-337c011ab92f",
00:21:00.667 "is_configured": true,
00:21:00.667 "data_offset": 2048,
00:21:00.667 "data_size": 63488
00:21:00.667 }
00:21:00.667 ]
00:21:00.667 }'
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:00.667 07:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.234 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.235 [2024-11-20 07:18:25.275716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:01.235 "name": "Existed_Raid",
00:21:01.235 "aliases": [
00:21:01.235 "0bc353a6-f552-4d0d-82be-dbf71d7e1a89"
00:21:01.235 ],
00:21:01.235 "product_name": "Raid Volume",
00:21:01.235 "block_size": 512,
00:21:01.235 "num_blocks": 126976,
00:21:01.235 "uuid": "0bc353a6-f552-4d0d-82be-dbf71d7e1a89",
00:21:01.235 "assigned_rate_limits": {
00:21:01.235 "rw_ios_per_sec": 0,
00:21:01.235 "rw_mbytes_per_sec": 0,
00:21:01.235 "r_mbytes_per_sec": 0,
00:21:01.235 "w_mbytes_per_sec": 0
00:21:01.235 },
00:21:01.235 "claimed": false,
00:21:01.235 "zoned": false,
00:21:01.235 "supported_io_types": {
00:21:01.235 "read": true,
00:21:01.235 "write": true,
00:21:01.235 "unmap": true,
00:21:01.235 "flush": true,
00:21:01.235 "reset": true,
00:21:01.235 "nvme_admin": false,
00:21:01.235 "nvme_io": false,
00:21:01.235 "nvme_io_md": false,
00:21:01.235 "write_zeroes": true,
00:21:01.235 "zcopy": false,
00:21:01.235 "get_zone_info": false,
00:21:01.235 "zone_management": false,
00:21:01.235 "zone_append": false,
00:21:01.235 "compare": false,
00:21:01.235 "compare_and_write": false,
00:21:01.235 "abort": false,
00:21:01.235 "seek_hole": false,
00:21:01.235 "seek_data": false,
00:21:01.235 "copy": false,
00:21:01.235 "nvme_iov_md": false
00:21:01.235 },
00:21:01.235 "memory_domains": [
00:21:01.235 {
00:21:01.235 "dma_device_id": "system",
00:21:01.235 "dma_device_type": 1
00:21:01.235 },
00:21:01.235 {
00:21:01.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:01.235 "dma_device_type": 2
00:21:01.235 },
00:21:01.235 {
00:21:01.235 "dma_device_id": "system",
00:21:01.235 "dma_device_type": 1
00:21:01.235 },
00:21:01.235 {
00:21:01.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:01.235 "dma_device_type": 2
00:21:01.235 }
00:21:01.235 ],
00:21:01.235 "driver_specific": {
00:21:01.235 "raid": {
00:21:01.235 "uuid": "0bc353a6-f552-4d0d-82be-dbf71d7e1a89",
00:21:01.235 "strip_size_kb": 64,
00:21:01.235 "state": "online",
00:21:01.235 "raid_level": "raid0",
00:21:01.235 "superblock": true,
00:21:01.235 "num_base_bdevs": 2,
00:21:01.235 "num_base_bdevs_discovered": 2,
00:21:01.235 "num_base_bdevs_operational": 2,
00:21:01.235 "base_bdevs_list": [
00:21:01.235 {
00:21:01.235 "name": "BaseBdev1",
00:21:01.235 "uuid": "34419beb-92fe-4d38-b43e-7cc3351082c4",
00:21:01.235 "is_configured": true,
00:21:01.235 "data_offset": 2048,
00:21:01.235 "data_size": 63488
00:21:01.235 },
00:21:01.235 {
00:21:01.235 "name": "BaseBdev2",
00:21:01.235 "uuid": "24ad3510-27ea-4622-a6b2-337c011ab92f",
00:21:01.235 "is_configured": true,
00:21:01.235 "data_offset": 2048,
00:21:01.235 "data_size": 63488
00:21:01.235 }
00:21:01.235 ]
00:21:01.235 }
00:21:01.235 }
00:21:01.235 }'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:21:01.235 BaseBdev2'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.235 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.495 [2024-11-20 07:18:25.539451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:01.495 [2024-11-20 07:18:25.539681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:01.495 [2024-11-20 07:18:25.539775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:01.495 "name": "Existed_Raid",
00:21:01.495 "uuid": "0bc353a6-f552-4d0d-82be-dbf71d7e1a89",
00:21:01.495 "strip_size_kb": 64,
00:21:01.495 "state": "offline",
00:21:01.495 "raid_level": "raid0",
00:21:01.495 "superblock": true,
00:21:01.495 "num_base_bdevs": 2,
00:21:01.495 "num_base_bdevs_discovered": 1,
00:21:01.495 "num_base_bdevs_operational": 1,
00:21:01.495 "base_bdevs_list": [
00:21:01.495 {
00:21:01.495 "name": null,
00:21:01.495 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:01.495 "is_configured": false,
00:21:01.495 "data_offset": 0,
00:21:01.495 "data_size": 63488
00:21:01.495 },
00:21:01.495 {
00:21:01.495 "name": "BaseBdev2",
00:21:01.495 "uuid": "24ad3510-27ea-4622-a6b2-337c011ab92f",
00:21:01.495 "is_configured": true,
00:21:01.495 "data_offset": 2048,
00:21:01.495 "data_size": 63488
00:21:01.495 }
00:21:01.495 ]
00:21:01.495 }'
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:01.495 07:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:02.062 [2024-11-20 07:18:26.215000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:02.062 [2024-11-20 07:18:26.215257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:02.062 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61094
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61094 ']'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61094
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61094
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:02.320 killing process with pid 61094
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61094'
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61094
00:21:02.320 [2024-11-20 07:18:26.391050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:02.320 07:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61094
00:21:02.320 [2024-11-20 07:18:26.406925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:03.258 07:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:21:03.258
00:21:03.258 real 0m5.650s
00:21:03.258 user 0m8.532s
00:21:03.258 sys 0m0.844s
00:21:03.258 07:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:03.258 ************************************
00:21:03.258 END TEST raid_state_function_test_sb
00:21:03.258 ************************************
00:21:03.258 07:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:03.258 07:18:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:21:03.258 07:18:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:21:03.258 07:18:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:03.258 07:18:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:03.258 ************************************
00:21:03.258 START TEST raid_superblock_test
00:21:03.258 ************************************
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61357
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61357
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61357 ']'
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:03.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:03.258 07:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:03.517 [2024-11-20 07:18:27.619442] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:21:03.517 [2024-11-20 07:18:27.620263] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ]
00:21:03.517 [2024-11-20 07:18:27.796029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.777 [2024-11-20 07:18:27.929060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:04.036 [2024-11-20 07:18:28.144391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:04.036 [2024-11-20 07:18:28.144450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:04.603 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 malloc1
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 [2024-11-20 07:18:28.733564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:21:04.604 [2024-11-20 07:18:28.733677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:04.604 [2024-11-20 07:18:28.733727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:21:04.604 [2024-11-20 07:18:28.733744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:04.604 [2024-11-20 07:18:28.736728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:04.604 [2024-11-20 07:18:28.736774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:21:04.604 pt1
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 malloc2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 [2024-11-20 07:18:28.788377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:04.604 [2024-11-20 07:18:28.788480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:04.604 [2024-11-20 07:18:28.788515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:21:04.604 [2024-11-20 07:18:28.788529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:04.604 [2024-11-20 07:18:28.791580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:04.604 [2024-11-20 07:18:28.791843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:04.604 pt2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 [2024-11-20 07:18:28.800651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:21:04.604 [2024-11-20 07:18:28.803431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:04.604 [2024-11-20 07:18:28.803852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:21:04.604 [2024-11-20 07:18:28.803879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:21:04.604 [2024-11-20 07:18:28.804289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:21:04.604 [2024-11-20 07:18:28.804510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:21:04.604 [2024-11-20 07:18:28.804533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:21:04.604 [2024-11-20 07:18:28.804840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:04.604 "name": "raid_bdev1",
00:21:04.604 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb",
00:21:04.604 "strip_size_kb": 64,
00:21:04.604 "state": "online",
00:21:04.604 "raid_level": "raid0",
00:21:04.604 "superblock": true,
00:21:04.604 "num_base_bdevs": 2,
00:21:04.604 "num_base_bdevs_discovered": 2,
00:21:04.604 "num_base_bdevs_operational": 2,
00:21:04.604 "base_bdevs_list": [
00:21:04.604 {
00:21:04.604 "name": "pt1",
00:21:04.604 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:04.604 "is_configured": true,
00:21:04.604 "data_offset": 2048,
00:21:04.604 "data_size": 63488
00:21:04.604 },
00:21:04.604 {
00:21:04.604 "name": "pt2",
00:21:04.604 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:04.604 "is_configured": true,
00:21:04.604 "data_offset": 2048,
00:21:04.604 "data_size": 63488
00:21:04.604 }
00:21:04.604 ]
00:21:04.604 }'
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:04.604 07:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:05.173 [2024-11-20 07:18:29.369294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:05.173 "name": "raid_bdev1",
00:21:05.173 "aliases": [
00:21:05.173 "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb"
00:21:05.173 ],
00:21:05.173 "product_name": "Raid Volume",
00:21:05.173 "block_size": 512,
00:21:05.173 "num_blocks": 126976,
00:21:05.173 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb",
00:21:05.173 "assigned_rate_limits": {
00:21:05.173 "rw_ios_per_sec": 0,
00:21:05.173 "rw_mbytes_per_sec": 0,
00:21:05.173 "r_mbytes_per_sec": 0,
00:21:05.173 "w_mbytes_per_sec": 0
00:21:05.173 },
00:21:05.173 "claimed": false,
00:21:05.173 "zoned": false,
00:21:05.173 "supported_io_types": {
00:21:05.173 "read": true,
00:21:05.173 "write": true,
00:21:05.173 "unmap": true,
00:21:05.173 "flush": true,
00:21:05.173 "reset": true,
00:21:05.173 "nvme_admin": false,
00:21:05.173 "nvme_io": false,
00:21:05.173 "nvme_io_md": false,
00:21:05.173 "write_zeroes": true,
00:21:05.173 "zcopy": false,
00:21:05.173 "get_zone_info": false,
00:21:05.173 "zone_management": false,
00:21:05.173 "zone_append": false,
00:21:05.173 "compare": false,
00:21:05.173 "compare_and_write": false,
00:21:05.173 "abort": false,
00:21:05.173 "seek_hole": false,
00:21:05.173 "seek_data": false,
00:21:05.173 "copy": false,
00:21:05.173 "nvme_iov_md": false
00:21:05.173 },
00:21:05.173 "memory_domains": [
00:21:05.173 {
00:21:05.173 "dma_device_id": "system",
00:21:05.173 "dma_device_type": 1
00:21:05.173 },
00:21:05.173 {
00:21:05.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:05.173 "dma_device_type": 2
00:21:05.173 },
00:21:05.173 {
00:21:05.173 "dma_device_id": "system",
00:21:05.173 "dma_device_type": 1
00:21:05.173 },
00:21:05.173 {
00:21:05.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:05.173 "dma_device_type": 2
00:21:05.173 }
00:21:05.173 ],
00:21:05.173 "driver_specific": {
00:21:05.173 "raid": {
00:21:05.173 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb",
00:21:05.173 "strip_size_kb": 64,
00:21:05.173 "state": "online",
00:21:05.173 "raid_level": "raid0",
00:21:05.173 "superblock": true,
00:21:05.173 "num_base_bdevs": 2,
00:21:05.173 "num_base_bdevs_discovered": 2,
00:21:05.173 "num_base_bdevs_operational": 2,
00:21:05.173 "base_bdevs_list": [
00:21:05.173 {
00:21:05.173 "name": "pt1",
00:21:05.173 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:05.173 "is_configured": true,
00:21:05.173 "data_offset": 2048,
00:21:05.173 "data_size": 63488
00:21:05.173 },
00:21:05.173 {
00:21:05.173 "name": "pt2",
00:21:05.173 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:05.173 "is_configured": true,
00:21:05.173 "data_offset": 2048,
00:21:05.173 "data_size": 63488
00:21:05.173 }
00:21:05.173 ]
00:21:05.173 }
00:21:05.173 }
00:21:05.173 }'
00:21:05.173 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:21:05.433 pt2'
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b
raid_bdev1 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.433 [2024-11-20 07:18:29.645455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb ']' 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.433 [2024-11-20 07:18:29.697031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.433 [2024-11-20 07:18:29.697067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.433 [2024-11-20 07:18:29.697196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.433 [2024-11-20 07:18:29.697321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.433 [2024-11-20 07:18:29.697343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.433 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.693 [2024-11-20 07:18:29.841115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:05.693 [2024-11-20 07:18:29.843926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:05.693 [2024-11-20 07:18:29.844034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:05.693 [2024-11-20 07:18:29.844114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:05.693 [2024-11-20 07:18:29.844141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.693 [2024-11-20 07:18:29.844161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:05.693 request: 00:21:05.693 { 00:21:05.693 "name": "raid_bdev1", 00:21:05.693 "raid_level": "raid0", 00:21:05.693 "base_bdevs": [ 00:21:05.693 "malloc1", 00:21:05.693 "malloc2" 00:21:05.693 ], 00:21:05.693 "strip_size_kb": 64, 00:21:05.693 "superblock": false, 00:21:05.693 "method": "bdev_raid_create", 00:21:05.693 "req_id": 1 00:21:05.693 } 00:21:05.693 Got JSON-RPC error response 00:21:05.693 response: 00:21:05.693 { 00:21:05.693 "code": -17, 00:21:05.693 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:05.693 } 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.693 07:18:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.693 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.693 [2024-11-20 07:18:29.905114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:05.693 [2024-11-20 07:18:29.905334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.693 [2024-11-20 07:18:29.905410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:05.693 [2024-11-20 07:18:29.905532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.693 [2024-11-20 07:18:29.908636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.693 [2024-11-20 07:18:29.908819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:05.693 [2024-11-20 07:18:29.909095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:05.693 [2024-11-20 07:18:29.909187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:05.693 pt1 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.694 "name": "raid_bdev1", 00:21:05.694 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb", 00:21:05.694 "strip_size_kb": 64, 00:21:05.694 "state": "configuring", 00:21:05.694 "raid_level": "raid0", 00:21:05.694 "superblock": true, 00:21:05.694 "num_base_bdevs": 2, 00:21:05.694 "num_base_bdevs_discovered": 1, 00:21:05.694 "num_base_bdevs_operational": 2, 00:21:05.694 "base_bdevs_list": [ 00:21:05.694 { 00:21:05.694 "name": "pt1", 00:21:05.694 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:05.694 "is_configured": true, 00:21:05.694 "data_offset": 2048, 00:21:05.694 "data_size": 63488 00:21:05.694 }, 00:21:05.694 { 00:21:05.694 "name": null, 00:21:05.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.694 "is_configured": false, 00:21:05.694 "data_offset": 2048, 00:21:05.694 "data_size": 63488 00:21:05.694 } 00:21:05.694 ] 00:21:05.694 }' 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.694 07:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.262 [2024-11-20 07:18:30.417373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:06.262 [2024-11-20 07:18:30.417490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.262 [2024-11-20 07:18:30.417523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.262 [2024-11-20 07:18:30.417541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.262 [2024-11-20 07:18:30.418153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.262 [2024-11-20 07:18:30.418231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:21:06.262 [2024-11-20 07:18:30.418363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:06.262 [2024-11-20 07:18:30.418398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:06.262 [2024-11-20 07:18:30.418578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:06.262 [2024-11-20 07:18:30.418614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:06.262 [2024-11-20 07:18:30.418913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.262 [2024-11-20 07:18:30.419249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:06.262 [2024-11-20 07:18:30.419274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:06.262 [2024-11-20 07:18:30.419473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.262 pt2 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.262 "name": "raid_bdev1", 00:21:06.262 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb", 00:21:06.262 "strip_size_kb": 64, 00:21:06.262 "state": "online", 00:21:06.262 "raid_level": "raid0", 00:21:06.262 "superblock": true, 00:21:06.262 "num_base_bdevs": 2, 00:21:06.262 "num_base_bdevs_discovered": 2, 00:21:06.262 "num_base_bdevs_operational": 2, 00:21:06.262 "base_bdevs_list": [ 00:21:06.262 { 00:21:06.262 "name": "pt1", 00:21:06.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.262 "is_configured": true, 00:21:06.262 "data_offset": 2048, 00:21:06.262 "data_size": 63488 00:21:06.262 }, 00:21:06.262 { 00:21:06.262 "name": "pt2", 00:21:06.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.262 "is_configured": true, 00:21:06.262 "data_offset": 2048, 00:21:06.262 "data_size": 63488 00:21:06.262 } 00:21:06.262 ] 00:21:06.262 }' 00:21:06.262 07:18:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.262 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.831 [2024-11-20 07:18:30.957908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.831 07:18:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.831 "name": "raid_bdev1", 00:21:06.831 "aliases": [ 00:21:06.831 "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb" 00:21:06.831 ], 00:21:06.831 "product_name": "Raid Volume", 00:21:06.831 "block_size": 512, 00:21:06.831 "num_blocks": 126976, 00:21:06.831 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb", 00:21:06.831 "assigned_rate_limits": { 00:21:06.831 "rw_ios_per_sec": 0, 00:21:06.831 "rw_mbytes_per_sec": 0, 00:21:06.831 
"r_mbytes_per_sec": 0, 00:21:06.831 "w_mbytes_per_sec": 0 00:21:06.831 }, 00:21:06.831 "claimed": false, 00:21:06.831 "zoned": false, 00:21:06.831 "supported_io_types": { 00:21:06.831 "read": true, 00:21:06.831 "write": true, 00:21:06.831 "unmap": true, 00:21:06.831 "flush": true, 00:21:06.831 "reset": true, 00:21:06.831 "nvme_admin": false, 00:21:06.831 "nvme_io": false, 00:21:06.831 "nvme_io_md": false, 00:21:06.831 "write_zeroes": true, 00:21:06.831 "zcopy": false, 00:21:06.831 "get_zone_info": false, 00:21:06.831 "zone_management": false, 00:21:06.831 "zone_append": false, 00:21:06.831 "compare": false, 00:21:06.831 "compare_and_write": false, 00:21:06.831 "abort": false, 00:21:06.831 "seek_hole": false, 00:21:06.831 "seek_data": false, 00:21:06.831 "copy": false, 00:21:06.831 "nvme_iov_md": false 00:21:06.831 }, 00:21:06.831 "memory_domains": [ 00:21:06.831 { 00:21:06.831 "dma_device_id": "system", 00:21:06.831 "dma_device_type": 1 00:21:06.831 }, 00:21:06.831 { 00:21:06.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.831 "dma_device_type": 2 00:21:06.831 }, 00:21:06.831 { 00:21:06.831 "dma_device_id": "system", 00:21:06.831 "dma_device_type": 1 00:21:06.831 }, 00:21:06.831 { 00:21:06.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.831 "dma_device_type": 2 00:21:06.831 } 00:21:06.831 ], 00:21:06.831 "driver_specific": { 00:21:06.831 "raid": { 00:21:06.831 "uuid": "a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb", 00:21:06.831 "strip_size_kb": 64, 00:21:06.831 "state": "online", 00:21:06.831 "raid_level": "raid0", 00:21:06.831 "superblock": true, 00:21:06.831 "num_base_bdevs": 2, 00:21:06.831 "num_base_bdevs_discovered": 2, 00:21:06.831 "num_base_bdevs_operational": 2, 00:21:06.831 "base_bdevs_list": [ 00:21:06.831 { 00:21:06.831 "name": "pt1", 00:21:06.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:06.831 "is_configured": true, 00:21:06.831 "data_offset": 2048, 00:21:06.831 "data_size": 63488 00:21:06.831 }, 00:21:06.831 { 00:21:06.831 "name": 
"pt2", 00:21:06.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.831 "is_configured": true, 00:21:06.831 "data_offset": 2048, 00:21:06.831 "data_size": 63488 00:21:06.831 } 00:21:06.831 ] 00:21:06.831 } 00:21:06.831 } 00:21:06.831 }' 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:06.831 pt2' 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.831 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.090 [2024-11-20 07:18:31.230057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb '!=' a2776ef2-3b90-4c79-ae7c-cb80cf2c64cb ']' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61357 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61357 ']' 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61357 00:21:07.090 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61357 00:21:07.091 killing process with pid 61357 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61357' 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61357 00:21:07.091 [2024-11-20 07:18:31.310813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:07.091 07:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61357 00:21:07.091 [2024-11-20 07:18:31.310948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.091 [2024-11-20 07:18:31.311044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.091 [2024-11-20 07:18:31.311063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:07.349 [2024-11-20 07:18:31.502455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:08.726 07:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:08.726 00:21:08.726 real 0m5.081s 00:21:08.726 user 0m7.500s 00:21:08.726 sys 0m0.751s 00:21:08.726 07:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.726 07:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:08.726 ************************************ 00:21:08.727 END TEST raid_superblock_test 00:21:08.727 ************************************ 00:21:08.727 07:18:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:21:08.727 07:18:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:08.727 07:18:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.727 07:18:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:08.727 ************************************ 00:21:08.727 START TEST raid_read_error_test 00:21:08.727 ************************************ 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VaUv2ST05b 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61574 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61574 00:21:08.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61574 ']' 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.727 07:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.727 [2024-11-20 07:18:32.785227] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:08.727 [2024-11-20 07:18:32.785426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61574 ] 00:21:08.727 [2024-11-20 07:18:32.978481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.986 [2024-11-20 07:18:33.143058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.244 [2024-11-20 07:18:33.401513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:09.244 [2024-11-20 07:18:33.401618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 BaseBdev1_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 true 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 [2024-11-20 07:18:33.937293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:09.810 [2024-11-20 07:18:33.937548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.810 [2024-11-20 07:18:33.937622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:09.810 [2024-11-20 07:18:33.937659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.810 [2024-11-20 07:18:33.941134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.810 [2024-11-20 07:18:33.941211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:09.810 BaseBdev1 00:21:09.810 
07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 BaseBdev2_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 true 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 [2024-11-20 07:18:33.999612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:09.810 [2024-11-20 07:18:33.999868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.810 [2024-11-20 07:18:33.999915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:09.810 [2024-11-20 07:18:33.999941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.810 [2024-11-20 
07:18:34.003658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.810 [2024-11-20 07:18:34.003722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:09.810 BaseBdev2 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 [2024-11-20 07:18:34.008012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.810 [2024-11-20 07:18:34.010899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.810 [2024-11-20 07:18:34.011409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:09.810 [2024-11-20 07:18:34.011450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:09.810 [2024-11-20 07:18:34.011849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:09.810 [2024-11-20 07:18:34.012220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:09.810 [2024-11-20 07:18:34.012245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:09.810 [2024-11-20 07:18:34.012547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
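Per the xtrace above, each base bdev is built as a three-layer stack (a malloc backing device, an error-injection bdev over it, and a passthru bdev that the array consumes) before `bdev_raid_create` assembles the raid0 volume. A minimal Python sketch of the RPC sequence the log records follows — the method names and arguments are the real SPDK RPCs shown in the trace; the list itself is only illustrative:

```python
# RPC sequence raid_io_error_test issues for a 2-disk raid0, as recorded in
# the xtrace output above: per base bdev, a 32 MiB / 512 B-block malloc bdev,
# an error-injection wrapper (named EE_<bdev>_malloc by SPDK), and a passthru
# bdev; then the raid0 volume with a 64 KiB strip and superblock (-s).
base_bdevs = ["BaseBdev1", "BaseBdev2"]
rpcs = []
for bdev in base_bdevs:
    rpcs.append(f"bdev_malloc_create 32 512 -b {bdev}_malloc")
    rpcs.append(f"bdev_error_create {bdev}_malloc")
    rpcs.append(f"bdev_passthru_create -b EE_{bdev}_malloc -p {bdev}")
rpcs.append("bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s")

assert len(rpcs) == 7
assert rpcs[2] == "bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1"
print("\n".join(rpcs))
```

The passthru layer is what makes the later `bdev_error_inject_error EE_BaseBdev1_malloc read failure` call reach the array: the raid sits on the passthru bdevs, and the error bdev underneath fails I/O on demand.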
00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.810 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.810 "name": "raid_bdev1", 00:21:09.810 "uuid": "111f83d7-afa6-41f0-9120-0132e8bcdaa8", 00:21:09.810 "strip_size_kb": 64, 00:21:09.810 "state": "online", 00:21:09.810 "raid_level": "raid0", 00:21:09.810 "superblock": true, 00:21:09.810 "num_base_bdevs": 2, 00:21:09.810 "num_base_bdevs_discovered": 2, 00:21:09.810 "num_base_bdevs_operational": 2, 00:21:09.810 
"base_bdevs_list": [ 00:21:09.810 { 00:21:09.810 "name": "BaseBdev1", 00:21:09.810 "uuid": "8e4bd2cd-5f4b-5c1c-976b-645d3655c119", 00:21:09.810 "is_configured": true, 00:21:09.810 "data_offset": 2048, 00:21:09.810 "data_size": 63488 00:21:09.810 }, 00:21:09.810 { 00:21:09.810 "name": "BaseBdev2", 00:21:09.810 "uuid": "f3cc4ead-6e84-58ed-bc06-e3e1db71b0c7", 00:21:09.810 "is_configured": true, 00:21:09.810 "data_offset": 2048, 00:21:09.811 "data_size": 63488 00:21:09.811 } 00:21:09.811 ] 00:21:09.811 }' 00:21:09.811 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.811 07:18:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.376 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:10.376 07:18:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:10.633 [2024-11-20 07:18:34.714173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.565 "name": "raid_bdev1", 00:21:11.565 "uuid": "111f83d7-afa6-41f0-9120-0132e8bcdaa8", 00:21:11.565 "strip_size_kb": 64, 00:21:11.565 "state": "online", 00:21:11.565 "raid_level": "raid0", 00:21:11.565 "superblock": true, 00:21:11.565 "num_base_bdevs": 2, 00:21:11.565 "num_base_bdevs_discovered": 2, 00:21:11.565 "num_base_bdevs_operational": 2, 00:21:11.565 
"base_bdevs_list": [ 00:21:11.565 { 00:21:11.565 "name": "BaseBdev1", 00:21:11.565 "uuid": "8e4bd2cd-5f4b-5c1c-976b-645d3655c119", 00:21:11.565 "is_configured": true, 00:21:11.565 "data_offset": 2048, 00:21:11.565 "data_size": 63488 00:21:11.565 }, 00:21:11.565 { 00:21:11.565 "name": "BaseBdev2", 00:21:11.565 "uuid": "f3cc4ead-6e84-58ed-bc06-e3e1db71b0c7", 00:21:11.565 "is_configured": true, 00:21:11.565 "data_offset": 2048, 00:21:11.565 "data_size": 63488 00:21:11.565 } 00:21:11.565 ] 00:21:11.565 }' 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.565 07:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.824 07:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:11.824 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.824 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.082 [2024-11-20 07:18:36.113607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.082 [2024-11-20 07:18:36.113659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:12.082 [2024-11-20 07:18:36.117532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.082 [2024-11-20 07:18:36.117694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.082 { 00:21:12.082 "results": [ 00:21:12.082 { 00:21:12.082 "job": "raid_bdev1", 00:21:12.082 "core_mask": "0x1", 00:21:12.082 "workload": "randrw", 00:21:12.082 "percentage": 50, 00:21:12.082 "status": "finished", 00:21:12.082 "queue_depth": 1, 00:21:12.082 "io_size": 131072, 00:21:12.082 "runtime": 1.396534, 00:21:12.082 "iops": 8945.002413117045, 00:21:12.082 "mibps": 1118.1253016396306, 00:21:12.082 "io_failed": 1, 00:21:12.082 "io_timeout": 0, 00:21:12.082 "avg_latency_us": 
156.03929589661118, 00:21:12.082 "min_latency_us": 43.75272727272727, 00:21:12.082 "max_latency_us": 2561.8618181818183 00:21:12.082 } 00:21:12.082 ], 00:21:12.082 "core_count": 1 00:21:12.082 } 00:21:12.082 [2024-11-20 07:18:36.117762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.082 [2024-11-20 07:18:36.117793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61574 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61574 ']' 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61574 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61574 00:21:12.082 killing process with pid 61574 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61574' 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61574 00:21:12.082 07:18:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61574 00:21:12.082 [2024-11-20 07:18:36.154095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.082 [2024-11-20 
07:18:36.307213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VaUv2ST05b 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:13.460 ************************************ 00:21:13.460 END TEST raid_read_error_test 00:21:13.460 ************************************ 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:21:13.460 00:21:13.460 real 0m4.791s 00:21:13.460 user 0m6.059s 00:21:13.460 sys 0m0.584s 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.460 07:18:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.460 07:18:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:21:13.460 07:18:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:13.460 07:18:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.460 07:18:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.460 ************************************ 00:21:13.460 START TEST raid_write_error_test 00:21:13.460 ************************************ 00:21:13.460 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:21:13.460 07:18:37 
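The `fail_per_s` value (0.72) that the grep/awk pipeline above pulls out of the bdevperf log is simply `io_failed / runtime` from the results block, and the reported MiB/s follows from IOPS times the 128 KiB `-o` size. A quick arithmetic check using the figures captured in the log:

```python
# Figures copied from the bdevperf "results" block in the log above.
runtime_s = 1.396534
iops = 8945.002413117045
io_failed = 1          # the single injected read error on EE_BaseBdev1_malloc
io_size_bytes = 131072 # -o 128k passed to bdevperf

# fail_per_s is what bdev_raid.sh greps out of the bdevperf log; the test
# then asserts [[ 0.72 != 0.00 ]], i.e. the injected error was observed.
fail_per_s = io_failed / runtime_s
assert round(fail_per_s, 2) == 0.72

# Throughput in MiB/s: each I/O is 128 KiB, i.e. 1/8 MiB.
mibps = iops * io_size_bytes / (1024 * 1024)
assert abs(mibps - 1118.1253016396306) < 1e-6
print(f"fail_per_s={fail_per_s:.2f} mibps={mibps:.4f}")
```

Note the `has_redundancy raid0` branch above returns 1, so for raid0 a nonzero failure rate is the expected outcome; for raid1 the test instead requires `fail_per_s` to be exactly 0.00.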
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:13.460 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:13.461 07:18:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kx90c59irS 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61720 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61720 00:21:13.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61720 ']' 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.461 07:18:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.461 [2024-11-20 07:18:37.622802] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:13.461 [2024-11-20 07:18:37.622966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61720 ] 00:21:13.720 [2024-11-20 07:18:37.797171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.720 [2024-11-20 07:18:37.935155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.979 [2024-11-20 07:18:38.153867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.979 [2024-11-20 07:18:38.153999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.546 BaseBdev1_malloc 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.546 true 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.546 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.546 [2024-11-20 07:18:38.679990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:14.547 [2024-11-20 07:18:38.680062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.547 [2024-11-20 07:18:38.680092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:14.547 [2024-11-20 07:18:38.680111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.547 [2024-11-20 07:18:38.683186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.547 [2024-11-20 07:18:38.683383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:14.547 BaseBdev1 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.547 BaseBdev2_malloc 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:14.547 07:18:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.547 true 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.547 [2024-11-20 07:18:38.747044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:14.547 [2024-11-20 07:18:38.747115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.547 [2024-11-20 07:18:38.747140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:14.547 [2024-11-20 07:18:38.747158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.547 [2024-11-20 07:18:38.750226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.547 [2024-11-20 07:18:38.750398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:14.547 BaseBdev2 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.547 [2024-11-20 07:18:38.759176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:21:14.547 [2024-11-20 07:18:38.761904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.547 [2024-11-20 07:18:38.762202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:14.547 [2024-11-20 07:18:38.762230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:14.547 [2024-11-20 07:18:38.762527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:14.547 [2024-11-20 07:18:38.762949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:14.547 [2024-11-20 07:18:38.763081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:14.547 [2024-11-20 07:18:38.763505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.547 "name": "raid_bdev1", 00:21:14.547 "uuid": "112ef189-db0b-4d00-aee5-a293aa58766d", 00:21:14.547 "strip_size_kb": 64, 00:21:14.547 "state": "online", 00:21:14.547 "raid_level": "raid0", 00:21:14.547 "superblock": true, 00:21:14.547 "num_base_bdevs": 2, 00:21:14.547 "num_base_bdevs_discovered": 2, 00:21:14.547 "num_base_bdevs_operational": 2, 00:21:14.547 "base_bdevs_list": [ 00:21:14.547 { 00:21:14.547 "name": "BaseBdev1", 00:21:14.547 "uuid": "ae803d50-595c-55ae-a5f2-d29e670c2925", 00:21:14.547 "is_configured": true, 00:21:14.547 "data_offset": 2048, 00:21:14.547 "data_size": 63488 00:21:14.547 }, 00:21:14.547 { 00:21:14.547 "name": "BaseBdev2", 00:21:14.547 "uuid": "fe72930c-57b3-526e-9804-2d37ae567545", 00:21:14.547 "is_configured": true, 00:21:14.547 "data_offset": 2048, 00:21:14.547 "data_size": 63488 00:21:14.547 } 00:21:14.547 ] 00:21:14.547 }' 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.547 07:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.116 07:18:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:15.116 07:18:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:15.375 [2024-11-20 07:18:39.417113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:16.007 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:16.007 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.007 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.272 07:18:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.272 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.272 "name": "raid_bdev1", 00:21:16.272 "uuid": "112ef189-db0b-4d00-aee5-a293aa58766d", 00:21:16.272 "strip_size_kb": 64, 00:21:16.272 "state": "online", 00:21:16.272 "raid_level": "raid0", 00:21:16.272 "superblock": true, 00:21:16.272 "num_base_bdevs": 2, 00:21:16.272 "num_base_bdevs_discovered": 2, 00:21:16.272 "num_base_bdevs_operational": 2, 00:21:16.272 "base_bdevs_list": [ 00:21:16.272 { 00:21:16.272 "name": "BaseBdev1", 00:21:16.272 "uuid": "ae803d50-595c-55ae-a5f2-d29e670c2925", 00:21:16.272 "is_configured": true, 00:21:16.272 "data_offset": 2048, 00:21:16.272 "data_size": 63488 00:21:16.272 }, 00:21:16.272 { 00:21:16.272 "name": "BaseBdev2", 00:21:16.272 "uuid": "fe72930c-57b3-526e-9804-2d37ae567545", 00:21:16.272 "is_configured": true, 00:21:16.272 "data_offset": 2048, 00:21:16.272 "data_size": 63488 00:21:16.272 } 00:21:16.272 ] 00:21:16.273 }' 00:21:16.273 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.273 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.841 [2024-11-20 07:18:40.844712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.841 [2024-11-20 07:18:40.844892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.841 [2024-11-20 07:18:40.848495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.841 [2024-11-20 07:18:40.848728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.841 { 00:21:16.841 "results": [ 00:21:16.841 { 00:21:16.841 "job": "raid_bdev1", 00:21:16.841 "core_mask": "0x1", 00:21:16.841 "workload": "randrw", 00:21:16.841 "percentage": 50, 00:21:16.841 "status": "finished", 00:21:16.841 "queue_depth": 1, 00:21:16.841 "io_size": 131072, 00:21:16.841 "runtime": 1.425266, 00:21:16.841 "iops": 10367.187598665792, 00:21:16.841 "mibps": 1295.898449833224, 00:21:16.841 "io_failed": 1, 00:21:16.841 "io_timeout": 0, 00:21:16.841 "avg_latency_us": 134.96518693054932, 00:21:16.841 "min_latency_us": 36.305454545454545, 00:21:16.841 "max_latency_us": 1891.6072727272726 00:21:16.841 } 00:21:16.841 ], 00:21:16.841 "core_count": 1 00:21:16.841 } 00:21:16.841 [2024-11-20 07:18:40.848826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.841 [2024-11-20 07:18:40.848853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61720 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61720 ']' 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61720 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61720 00:21:16.841 killing process with pid 61720 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61720' 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61720 00:21:16.841 [2024-11-20 07:18:40.886050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:16.841 07:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61720 00:21:16.841 [2024-11-20 07:18:41.009399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kx90c59irS 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:18.218 ************************************ 00:21:18.218 END TEST raid_write_error_test 00:21:18.218 ************************************ 00:21:18.218 
07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:21:18.218 00:21:18.218 real 0m4.626s 00:21:18.218 user 0m5.831s 00:21:18.218 sys 0m0.548s 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.218 07:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 07:18:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:18.218 07:18:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:21:18.218 07:18:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:18.218 07:18:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.218 07:18:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 ************************************ 00:21:18.218 START TEST raid_state_function_test 00:21:18.218 ************************************ 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
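The `fail_per_s=0.70` value checked just above is extracted from the bdevperf results with grep/awk. It is consistent with `io_failed` divided by `runtime` from the results JSON printed earlier in this log (`io_failed: 1`, `runtime: 1.425266`). As an illustrative aside, not code from the SPDK repository, the same figure can be recomputed directly:

```python
# Recompute the per-second failure rate from the bdevperf "results"
# block shown earlier in this log. The values below are copied from
# that JSON; this is an illustration, not part of the test scripts.
results = {
    "job": "raid_bdev1",
    "io_failed": 1,       # one write failed after error injection on EE_BaseBdev1_malloc
    "runtime": 1.425266,  # seconds
}

fail_per_s = results["io_failed"] / results["runtime"]
print(f"{fail_per_s:.2f}")  # prints 0.70, matching the value the test compares against 0.00
```

The test only asserts that the rate is nonzero (`[[ 0.70 != \0\.\0\0 ]]`), i.e. that the injected write error actually surfaced as a failed I/O.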
00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:18.218 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:18.219 Process raid pid: 61862 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61862 
00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61862' 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61862 00:21:18.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61862 ']' 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.219 07:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 [2024-11-20 07:18:42.305481] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
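Throughout this log, `verify_raid_bdev_state` fetches `bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "…")'`, and compares individual fields against expected values. A minimal Python sketch of that comparison logic (a hypothetical helper, not the actual `bdev_raid.sh` implementation) applied to the `Existed_Raid` JSON printed below:

```python
# Hypothetical re-implementation of the field checks that
# verify_raid_bdev_state performs on `bdev_raid_get_bdevs` output.
# The sample dict mirrors the Existed_Raid JSON dumps in this log.

def verify_raid_bdev_state(info: dict, expected_state: str,
                           raid_level: str, strip_size: int,
                           num_operational: int) -> bool:
    """Return True when the raid bdev matches the expected configuration."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

existed_raid = {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 2,
}

print(verify_raid_bdev_state(existed_raid, "configuring", "concat", 64, 2))  # True
print(verify_raid_bdev_state(existed_raid, "online", "concat", 64, 2))       # False
```

The shell version builds the same expectations from positional arguments (name, state, level, strip size, operational count) before the comparison loop.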
00:21:18.219 [2024-11-20 07:18:42.305945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.219 [2024-11-20 07:18:42.503011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.477 [2024-11-20 07:18:42.658908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.735 [2024-11-20 07:18:42.870955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.735 [2024-11-20 07:18:42.871007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.304 [2024-11-20 07:18:43.369107] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:19.304 [2024-11-20 07:18:43.369188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:19.304 [2024-11-20 07:18:43.369205] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:19.304 [2024-11-20 07:18:43.369220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.304 07:18:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.304 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.304 "name": "Existed_Raid", 00:21:19.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.304 "strip_size_kb": 64, 00:21:19.304 "state": "configuring", 00:21:19.304 
"raid_level": "concat", 00:21:19.304 "superblock": false, 00:21:19.304 "num_base_bdevs": 2, 00:21:19.304 "num_base_bdevs_discovered": 0, 00:21:19.304 "num_base_bdevs_operational": 2, 00:21:19.304 "base_bdevs_list": [ 00:21:19.304 { 00:21:19.304 "name": "BaseBdev1", 00:21:19.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.304 "is_configured": false, 00:21:19.304 "data_offset": 0, 00:21:19.304 "data_size": 0 00:21:19.304 }, 00:21:19.304 { 00:21:19.304 "name": "BaseBdev2", 00:21:19.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.304 "is_configured": false, 00:21:19.305 "data_offset": 0, 00:21:19.305 "data_size": 0 00:21:19.305 } 00:21:19.305 ] 00:21:19.305 }' 00:21:19.305 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.305 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.874 [2024-11-20 07:18:43.893187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:19.874 [2024-11-20 07:18:43.893230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:19.874 [2024-11-20 07:18:43.905241] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:19.874 [2024-11-20 07:18:43.905318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:19.874 [2024-11-20 07:18:43.905349] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:19.874 [2024-11-20 07:18:43.905368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.874 [2024-11-20 07:18:43.952065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:19.874 BaseBdev1 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:19.874 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.875 [ 00:21:19.875 { 00:21:19.875 "name": "BaseBdev1", 00:21:19.875 "aliases": [ 00:21:19.875 "220af185-a64c-4d66-bf76-b1b913dd196e" 00:21:19.875 ], 00:21:19.875 "product_name": "Malloc disk", 00:21:19.875 "block_size": 512, 00:21:19.875 "num_blocks": 65536, 00:21:19.875 "uuid": "220af185-a64c-4d66-bf76-b1b913dd196e", 00:21:19.875 "assigned_rate_limits": { 00:21:19.875 "rw_ios_per_sec": 0, 00:21:19.875 "rw_mbytes_per_sec": 0, 00:21:19.875 "r_mbytes_per_sec": 0, 00:21:19.875 "w_mbytes_per_sec": 0 00:21:19.875 }, 00:21:19.875 "claimed": true, 00:21:19.875 "claim_type": "exclusive_write", 00:21:19.875 "zoned": false, 00:21:19.875 "supported_io_types": { 00:21:19.875 "read": true, 00:21:19.875 "write": true, 00:21:19.875 "unmap": true, 00:21:19.875 "flush": true, 00:21:19.875 "reset": true, 00:21:19.875 "nvme_admin": false, 00:21:19.875 "nvme_io": false, 00:21:19.875 "nvme_io_md": false, 00:21:19.875 "write_zeroes": true, 00:21:19.875 "zcopy": true, 00:21:19.875 "get_zone_info": false, 00:21:19.875 "zone_management": false, 00:21:19.875 "zone_append": false, 00:21:19.875 "compare": false, 00:21:19.875 "compare_and_write": false, 00:21:19.875 "abort": true, 00:21:19.875 "seek_hole": false, 00:21:19.875 "seek_data": false, 00:21:19.875 "copy": true, 00:21:19.875 "nvme_iov_md": 
false 00:21:19.875 }, 00:21:19.875 "memory_domains": [ 00:21:19.875 { 00:21:19.875 "dma_device_id": "system", 00:21:19.875 "dma_device_type": 1 00:21:19.875 }, 00:21:19.875 { 00:21:19.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.875 "dma_device_type": 2 00:21:19.875 } 00:21:19.875 ], 00:21:19.875 "driver_specific": {} 00:21:19.875 } 00:21:19.875 ] 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.875 
07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.875 07:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.875 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.875 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.875 "name": "Existed_Raid", 00:21:19.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.875 "strip_size_kb": 64, 00:21:19.875 "state": "configuring", 00:21:19.875 "raid_level": "concat", 00:21:19.875 "superblock": false, 00:21:19.875 "num_base_bdevs": 2, 00:21:19.875 "num_base_bdevs_discovered": 1, 00:21:19.875 "num_base_bdevs_operational": 2, 00:21:19.875 "base_bdevs_list": [ 00:21:19.875 { 00:21:19.875 "name": "BaseBdev1", 00:21:19.875 "uuid": "220af185-a64c-4d66-bf76-b1b913dd196e", 00:21:19.875 "is_configured": true, 00:21:19.875 "data_offset": 0, 00:21:19.875 "data_size": 65536 00:21:19.875 }, 00:21:19.875 { 00:21:19.875 "name": "BaseBdev2", 00:21:19.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.875 "is_configured": false, 00:21:19.875 "data_offset": 0, 00:21:19.875 "data_size": 0 00:21:19.875 } 00:21:19.875 ] 00:21:19.875 }' 00:21:19.875 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.875 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.443 [2024-11-20 07:18:44.504318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:20.443 [2024-11-20 07:18:44.504530] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.443 [2024-11-20 07:18:44.512372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.443 [2024-11-20 07:18:44.514944] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:20.443 [2024-11-20 07:18:44.515161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.443 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.443 "name": "Existed_Raid", 00:21:20.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.443 "strip_size_kb": 64, 00:21:20.443 "state": "configuring", 00:21:20.443 "raid_level": "concat", 00:21:20.443 "superblock": false, 00:21:20.443 "num_base_bdevs": 2, 00:21:20.443 "num_base_bdevs_discovered": 1, 00:21:20.443 "num_base_bdevs_operational": 2, 00:21:20.444 "base_bdevs_list": [ 00:21:20.444 { 00:21:20.444 "name": "BaseBdev1", 00:21:20.444 "uuid": "220af185-a64c-4d66-bf76-b1b913dd196e", 00:21:20.444 "is_configured": true, 00:21:20.444 "data_offset": 0, 00:21:20.444 "data_size": 65536 00:21:20.444 }, 00:21:20.444 { 00:21:20.444 "name": "BaseBdev2", 00:21:20.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.444 "is_configured": false, 00:21:20.444 "data_offset": 0, 00:21:20.444 "data_size": 0 00:21:20.444 } 
00:21:20.444 ] 00:21:20.444 }' 00:21:20.444 07:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.444 07:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 [2024-11-20 07:18:45.100898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.012 [2024-11-20 07:18:45.100967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:21.012 [2024-11-20 07:18:45.100981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:21.012 [2024-11-20 07:18:45.101314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:21.012 [2024-11-20 07:18:45.101519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:21.012 [2024-11-20 07:18:45.101542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:21.012 [2024-11-20 07:18:45.101892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.012 BaseBdev2 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:21.012 07:18:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 [ 00:21:21.013 { 00:21:21.013 "name": "BaseBdev2", 00:21:21.013 "aliases": [ 00:21:21.013 "33bc5210-66d5-432b-ad97-ada8c590a77d" 00:21:21.013 ], 00:21:21.013 "product_name": "Malloc disk", 00:21:21.013 "block_size": 512, 00:21:21.013 "num_blocks": 65536, 00:21:21.013 "uuid": "33bc5210-66d5-432b-ad97-ada8c590a77d", 00:21:21.013 "assigned_rate_limits": { 00:21:21.013 "rw_ios_per_sec": 0, 00:21:21.013 "rw_mbytes_per_sec": 0, 00:21:21.013 "r_mbytes_per_sec": 0, 00:21:21.013 "w_mbytes_per_sec": 0 00:21:21.013 }, 00:21:21.013 "claimed": true, 00:21:21.013 "claim_type": "exclusive_write", 00:21:21.013 "zoned": false, 00:21:21.013 "supported_io_types": { 00:21:21.013 "read": true, 00:21:21.013 "write": true, 00:21:21.013 "unmap": true, 00:21:21.013 "flush": true, 00:21:21.013 "reset": true, 00:21:21.013 "nvme_admin": false, 00:21:21.013 "nvme_io": false, 00:21:21.013 "nvme_io_md": 
false, 00:21:21.013 "write_zeroes": true, 00:21:21.013 "zcopy": true, 00:21:21.013 "get_zone_info": false, 00:21:21.013 "zone_management": false, 00:21:21.013 "zone_append": false, 00:21:21.013 "compare": false, 00:21:21.013 "compare_and_write": false, 00:21:21.013 "abort": true, 00:21:21.013 "seek_hole": false, 00:21:21.013 "seek_data": false, 00:21:21.013 "copy": true, 00:21:21.013 "nvme_iov_md": false 00:21:21.013 }, 00:21:21.013 "memory_domains": [ 00:21:21.013 { 00:21:21.013 "dma_device_id": "system", 00:21:21.013 "dma_device_type": 1 00:21:21.013 }, 00:21:21.013 { 00:21:21.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.013 "dma_device_type": 2 00:21:21.013 } 00:21:21.013 ], 00:21:21.013 "driver_specific": {} 00:21:21.013 } 00:21:21.013 ] 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.013 "name": "Existed_Raid", 00:21:21.013 "uuid": "03f9aa68-ab09-409f-8b9e-7a777194dabf", 00:21:21.013 "strip_size_kb": 64, 00:21:21.013 "state": "online", 00:21:21.013 "raid_level": "concat", 00:21:21.013 "superblock": false, 00:21:21.013 "num_base_bdevs": 2, 00:21:21.013 "num_base_bdevs_discovered": 2, 00:21:21.013 "num_base_bdevs_operational": 2, 00:21:21.013 "base_bdevs_list": [ 00:21:21.013 { 00:21:21.013 "name": "BaseBdev1", 00:21:21.013 "uuid": "220af185-a64c-4d66-bf76-b1b913dd196e", 00:21:21.013 "is_configured": true, 00:21:21.013 "data_offset": 0, 00:21:21.013 "data_size": 65536 00:21:21.013 }, 00:21:21.013 { 00:21:21.013 "name": "BaseBdev2", 00:21:21.013 "uuid": "33bc5210-66d5-432b-ad97-ada8c590a77d", 00:21:21.013 "is_configured": true, 00:21:21.013 "data_offset": 0, 00:21:21.013 "data_size": 65536 00:21:21.013 } 00:21:21.013 ] 00:21:21.013 }' 00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:21:21.013 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.582 [2024-11-20 07:18:45.657554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:21.582 "name": "Existed_Raid", 00:21:21.582 "aliases": [ 00:21:21.582 "03f9aa68-ab09-409f-8b9e-7a777194dabf" 00:21:21.582 ], 00:21:21.582 "product_name": "Raid Volume", 00:21:21.582 "block_size": 512, 00:21:21.582 "num_blocks": 131072, 00:21:21.582 "uuid": "03f9aa68-ab09-409f-8b9e-7a777194dabf", 00:21:21.582 "assigned_rate_limits": { 00:21:21.582 "rw_ios_per_sec": 0, 00:21:21.582 "rw_mbytes_per_sec": 0, 00:21:21.582 "r_mbytes_per_sec": 
0, 00:21:21.582 "w_mbytes_per_sec": 0 00:21:21.582 }, 00:21:21.582 "claimed": false, 00:21:21.582 "zoned": false, 00:21:21.582 "supported_io_types": { 00:21:21.582 "read": true, 00:21:21.582 "write": true, 00:21:21.582 "unmap": true, 00:21:21.582 "flush": true, 00:21:21.582 "reset": true, 00:21:21.582 "nvme_admin": false, 00:21:21.582 "nvme_io": false, 00:21:21.582 "nvme_io_md": false, 00:21:21.582 "write_zeroes": true, 00:21:21.582 "zcopy": false, 00:21:21.582 "get_zone_info": false, 00:21:21.582 "zone_management": false, 00:21:21.582 "zone_append": false, 00:21:21.582 "compare": false, 00:21:21.582 "compare_and_write": false, 00:21:21.582 "abort": false, 00:21:21.582 "seek_hole": false, 00:21:21.582 "seek_data": false, 00:21:21.582 "copy": false, 00:21:21.582 "nvme_iov_md": false 00:21:21.582 }, 00:21:21.582 "memory_domains": [ 00:21:21.582 { 00:21:21.582 "dma_device_id": "system", 00:21:21.582 "dma_device_type": 1 00:21:21.582 }, 00:21:21.582 { 00:21:21.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.582 "dma_device_type": 2 00:21:21.582 }, 00:21:21.582 { 00:21:21.582 "dma_device_id": "system", 00:21:21.582 "dma_device_type": 1 00:21:21.582 }, 00:21:21.582 { 00:21:21.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.582 "dma_device_type": 2 00:21:21.582 } 00:21:21.582 ], 00:21:21.582 "driver_specific": { 00:21:21.582 "raid": { 00:21:21.582 "uuid": "03f9aa68-ab09-409f-8b9e-7a777194dabf", 00:21:21.582 "strip_size_kb": 64, 00:21:21.582 "state": "online", 00:21:21.582 "raid_level": "concat", 00:21:21.582 "superblock": false, 00:21:21.582 "num_base_bdevs": 2, 00:21:21.582 "num_base_bdevs_discovered": 2, 00:21:21.582 "num_base_bdevs_operational": 2, 00:21:21.582 "base_bdevs_list": [ 00:21:21.582 { 00:21:21.582 "name": "BaseBdev1", 00:21:21.582 "uuid": "220af185-a64c-4d66-bf76-b1b913dd196e", 00:21:21.582 "is_configured": true, 00:21:21.582 "data_offset": 0, 00:21:21.582 "data_size": 65536 00:21:21.582 }, 00:21:21.582 { 00:21:21.582 "name": "BaseBdev2", 
00:21:21.582 "uuid": "33bc5210-66d5-432b-ad97-ada8c590a77d", 00:21:21.582 "is_configured": true, 00:21:21.582 "data_offset": 0, 00:21:21.582 "data_size": 65536 00:21:21.582 } 00:21:21.582 ] 00:21:21.582 } 00:21:21.582 } 00:21:21.582 }' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:21.582 BaseBdev2' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.582 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.841 07:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.841 [2024-11-20 07:18:45.917351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.841 [2024-11-20 07:18:45.917395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.841 [2024-11-20 07:18:45.917462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.841 "name": "Existed_Raid", 00:21:21.841 "uuid": "03f9aa68-ab09-409f-8b9e-7a777194dabf", 00:21:21.841 "strip_size_kb": 64, 00:21:21.841 
"state": "offline", 00:21:21.841 "raid_level": "concat", 00:21:21.841 "superblock": false, 00:21:21.841 "num_base_bdevs": 2, 00:21:21.841 "num_base_bdevs_discovered": 1, 00:21:21.841 "num_base_bdevs_operational": 1, 00:21:21.841 "base_bdevs_list": [ 00:21:21.841 { 00:21:21.841 "name": null, 00:21:21.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.841 "is_configured": false, 00:21:21.841 "data_offset": 0, 00:21:21.841 "data_size": 65536 00:21:21.841 }, 00:21:21.841 { 00:21:21.841 "name": "BaseBdev2", 00:21:21.841 "uuid": "33bc5210-66d5-432b-ad97-ada8c590a77d", 00:21:21.841 "is_configured": true, 00:21:21.841 "data_offset": 0, 00:21:21.841 "data_size": 65536 00:21:21.841 } 00:21:21.841 ] 00:21:21.841 }' 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.841 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 [2024-11-20 07:18:46.585806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:22.410 [2024-11-20 07:18:46.585881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61862 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61862 ']' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61862 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61862 00:21:22.669 killing process with pid 61862 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61862' 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61862 00:21:22.669 [2024-11-20 07:18:46.764738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.669 07:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61862 00:21:22.669 [2024-11-20 07:18:46.779666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:23.606 07:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:23.606 ************************************ 00:21:23.606 END TEST raid_state_function_test 00:21:23.606 ************************************ 00:21:23.606 00:21:23.606 real 0m5.654s 00:21:23.606 user 0m8.551s 00:21:23.606 sys 0m0.812s 00:21:23.606 07:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.606 07:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 07:18:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:21:23.606 07:18:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:21:23.606 07:18:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.606 07:18:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:23.865 ************************************ 00:21:23.865 START TEST raid_state_function_test_sb 00:21:23.865 ************************************ 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:23.865 Process raid pid: 62122 00:21:23.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62122 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62122' 00:21:23.865 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62122 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62122 ']' 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.866 07:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.866 [2024-11-20 07:18:48.012555] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:23.866 [2024-11-20 07:18:48.012759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.124 [2024-11-20 07:18:48.192331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.124 [2024-11-20 07:18:48.328280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.384 [2024-11-20 07:18:48.541184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.384 [2024-11-20 07:18:48.541218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.951 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.951 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.952 [2024-11-20 07:18:49.010198] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:24.952 [2024-11-20 07:18:49.010285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:24.952 [2024-11-20 07:18:49.010312] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:24.952 [2024-11-20 07:18:49.010339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.952 "name": "Existed_Raid", 00:21:24.952 "uuid": "2e91a3f1-ecf6-43a2-8c41-d1f371ede174", 00:21:24.952 "strip_size_kb": 64, 00:21:24.952 "state": "configuring", 00:21:24.952 "raid_level": "concat", 00:21:24.952 "superblock": true, 00:21:24.952 "num_base_bdevs": 2, 00:21:24.952 "num_base_bdevs_discovered": 0, 00:21:24.952 "num_base_bdevs_operational": 2, 00:21:24.952 "base_bdevs_list": [ 00:21:24.952 { 00:21:24.952 "name": "BaseBdev1", 00:21:24.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.952 "is_configured": false, 00:21:24.952 "data_offset": 0, 00:21:24.952 "data_size": 0 00:21:24.952 }, 00:21:24.952 { 00:21:24.952 "name": "BaseBdev2", 00:21:24.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.952 "is_configured": false, 00:21:24.952 "data_offset": 0, 00:21:24.952 "data_size": 0 00:21:24.952 } 00:21:24.952 ] 00:21:24.952 }' 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.952 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 [2024-11-20 07:18:49.590258] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:21:25.520 [2024-11-20 07:18:49.590483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 [2024-11-20 07:18:49.598255] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.520 [2024-11-20 07:18:49.598318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.520 [2024-11-20 07:18:49.598334] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.520 [2024-11-20 07:18:49.598351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 [2024-11-20 07:18:49.644428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.520 BaseBdev1 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.520 07:18:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.520 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 [ 00:21:25.520 { 00:21:25.520 "name": "BaseBdev1", 00:21:25.520 "aliases": [ 00:21:25.520 "9e4d94c1-0da9-4416-b41d-6562ed01c515" 00:21:25.520 ], 00:21:25.520 "product_name": "Malloc disk", 00:21:25.520 "block_size": 512, 00:21:25.520 "num_blocks": 65536, 00:21:25.521 "uuid": "9e4d94c1-0da9-4416-b41d-6562ed01c515", 00:21:25.521 "assigned_rate_limits": { 00:21:25.521 "rw_ios_per_sec": 0, 00:21:25.521 "rw_mbytes_per_sec": 0, 00:21:25.521 "r_mbytes_per_sec": 0, 00:21:25.521 "w_mbytes_per_sec": 0 
00:21:25.521 }, 00:21:25.521 "claimed": true, 00:21:25.521 "claim_type": "exclusive_write", 00:21:25.521 "zoned": false, 00:21:25.521 "supported_io_types": { 00:21:25.521 "read": true, 00:21:25.521 "write": true, 00:21:25.521 "unmap": true, 00:21:25.521 "flush": true, 00:21:25.521 "reset": true, 00:21:25.521 "nvme_admin": false, 00:21:25.521 "nvme_io": false, 00:21:25.521 "nvme_io_md": false, 00:21:25.521 "write_zeroes": true, 00:21:25.521 "zcopy": true, 00:21:25.521 "get_zone_info": false, 00:21:25.521 "zone_management": false, 00:21:25.521 "zone_append": false, 00:21:25.521 "compare": false, 00:21:25.521 "compare_and_write": false, 00:21:25.521 "abort": true, 00:21:25.521 "seek_hole": false, 00:21:25.521 "seek_data": false, 00:21:25.521 "copy": true, 00:21:25.521 "nvme_iov_md": false 00:21:25.521 }, 00:21:25.521 "memory_domains": [ 00:21:25.521 { 00:21:25.521 "dma_device_id": "system", 00:21:25.521 "dma_device_type": 1 00:21:25.521 }, 00:21:25.521 { 00:21:25.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.521 "dma_device_type": 2 00:21:25.521 } 00:21:25.521 ], 00:21:25.521 "driver_specific": {} 00:21:25.521 } 00:21:25.521 ] 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.521 "name": "Existed_Raid", 00:21:25.521 "uuid": "b4404817-ed6c-4b06-9d31-bd22db2dad09", 00:21:25.521 "strip_size_kb": 64, 00:21:25.521 "state": "configuring", 00:21:25.521 "raid_level": "concat", 00:21:25.521 "superblock": true, 00:21:25.521 "num_base_bdevs": 2, 00:21:25.521 "num_base_bdevs_discovered": 1, 00:21:25.521 "num_base_bdevs_operational": 2, 00:21:25.521 "base_bdevs_list": [ 00:21:25.521 { 00:21:25.521 "name": "BaseBdev1", 00:21:25.521 "uuid": "9e4d94c1-0da9-4416-b41d-6562ed01c515", 00:21:25.521 "is_configured": true, 00:21:25.521 "data_offset": 2048, 00:21:25.521 "data_size": 63488 00:21:25.521 }, 00:21:25.521 { 00:21:25.521 "name": "BaseBdev2", 00:21:25.521 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:25.521 "is_configured": false, 00:21:25.521 "data_offset": 0, 00:21:25.521 "data_size": 0 00:21:25.521 } 00:21:25.521 ] 00:21:25.521 }' 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.521 07:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.089 [2024-11-20 07:18:50.220676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.089 [2024-11-20 07:18:50.220739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.089 [2024-11-20 07:18:50.232769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.089 [2024-11-20 07:18:50.235379] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.089 [2024-11-20 07:18:50.235461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.089 
07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.089 
07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.089 "name": "Existed_Raid", 00:21:26.089 "uuid": "ac232b0e-92cc-46b6-b53d-638180928168", 00:21:26.089 "strip_size_kb": 64, 00:21:26.089 "state": "configuring", 00:21:26.089 "raid_level": "concat", 00:21:26.089 "superblock": true, 00:21:26.089 "num_base_bdevs": 2, 00:21:26.089 "num_base_bdevs_discovered": 1, 00:21:26.089 "num_base_bdevs_operational": 2, 00:21:26.089 "base_bdevs_list": [ 00:21:26.089 { 00:21:26.089 "name": "BaseBdev1", 00:21:26.089 "uuid": "9e4d94c1-0da9-4416-b41d-6562ed01c515", 00:21:26.089 "is_configured": true, 00:21:26.089 "data_offset": 2048, 00:21:26.089 "data_size": 63488 00:21:26.089 }, 00:21:26.089 { 00:21:26.089 "name": "BaseBdev2", 00:21:26.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.089 "is_configured": false, 00:21:26.089 "data_offset": 0, 00:21:26.089 "data_size": 0 00:21:26.089 } 00:21:26.089 ] 00:21:26.089 }' 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.089 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.656 [2024-11-20 07:18:50.796567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.656 [2024-11-20 07:18:50.796891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:26.656 [2024-11-20 07:18:50.796912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:26.656 BaseBdev2 00:21:26.656 [2024-11-20 07:18:50.797266] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:26.656 [2024-11-20 07:18:50.797456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:26.656 [2024-11-20 07:18:50.797484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:26.656 [2024-11-20 07:18:50.797680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.656 
07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.656 [ 00:21:26.656 { 00:21:26.656 "name": "BaseBdev2", 00:21:26.656 "aliases": [ 00:21:26.656 "0bfa27d5-ca92-4ee3-b5d6-a7796fcbd4dd" 00:21:26.656 ], 00:21:26.656 "product_name": "Malloc disk", 00:21:26.656 "block_size": 512, 00:21:26.656 "num_blocks": 65536, 00:21:26.656 "uuid": "0bfa27d5-ca92-4ee3-b5d6-a7796fcbd4dd", 00:21:26.656 "assigned_rate_limits": { 00:21:26.656 "rw_ios_per_sec": 0, 00:21:26.656 "rw_mbytes_per_sec": 0, 00:21:26.656 "r_mbytes_per_sec": 0, 00:21:26.656 "w_mbytes_per_sec": 0 00:21:26.656 }, 00:21:26.656 "claimed": true, 00:21:26.656 "claim_type": "exclusive_write", 00:21:26.656 "zoned": false, 00:21:26.656 "supported_io_types": { 00:21:26.656 "read": true, 00:21:26.656 "write": true, 00:21:26.656 "unmap": true, 00:21:26.656 "flush": true, 00:21:26.656 "reset": true, 00:21:26.656 "nvme_admin": false, 00:21:26.656 "nvme_io": false, 00:21:26.656 "nvme_io_md": false, 00:21:26.656 "write_zeroes": true, 00:21:26.656 "zcopy": true, 00:21:26.656 "get_zone_info": false, 00:21:26.656 "zone_management": false, 00:21:26.656 "zone_append": false, 00:21:26.656 "compare": false, 00:21:26.656 "compare_and_write": false, 00:21:26.656 "abort": true, 00:21:26.656 "seek_hole": false, 00:21:26.656 "seek_data": false, 00:21:26.656 "copy": true, 00:21:26.656 "nvme_iov_md": false 00:21:26.656 }, 00:21:26.656 "memory_domains": [ 00:21:26.656 { 00:21:26.656 "dma_device_id": "system", 00:21:26.656 "dma_device_type": 1 00:21:26.656 }, 00:21:26.656 { 00:21:26.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.656 "dma_device_type": 2 00:21:26.656 } 00:21:26.656 ], 00:21:26.656 "driver_specific": {} 00:21:26.656 } 00:21:26.656 ] 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:26.656 07:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:21:26.656 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.657 07:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.657 "name": "Existed_Raid", 00:21:26.657 "uuid": "ac232b0e-92cc-46b6-b53d-638180928168", 00:21:26.657 "strip_size_kb": 64, 00:21:26.657 "state": "online", 00:21:26.657 "raid_level": "concat", 00:21:26.657 "superblock": true, 00:21:26.657 "num_base_bdevs": 2, 00:21:26.657 "num_base_bdevs_discovered": 2, 00:21:26.657 "num_base_bdevs_operational": 2, 00:21:26.657 "base_bdevs_list": [ 00:21:26.657 { 00:21:26.657 "name": "BaseBdev1", 00:21:26.657 "uuid": "9e4d94c1-0da9-4416-b41d-6562ed01c515", 00:21:26.657 "is_configured": true, 00:21:26.657 "data_offset": 2048, 00:21:26.657 "data_size": 63488 00:21:26.657 }, 00:21:26.657 { 00:21:26.657 "name": "BaseBdev2", 00:21:26.657 "uuid": "0bfa27d5-ca92-4ee3-b5d6-a7796fcbd4dd", 00:21:26.657 "is_configured": true, 00:21:26.657 "data_offset": 2048, 00:21:26.657 "data_size": 63488 00:21:26.657 } 00:21:26.657 ] 00:21:26.657 }' 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.657 07:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.225 [2024-11-20 07:18:51.361248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:27.225 "name": "Existed_Raid", 00:21:27.225 "aliases": [ 00:21:27.225 "ac232b0e-92cc-46b6-b53d-638180928168" 00:21:27.225 ], 00:21:27.225 "product_name": "Raid Volume", 00:21:27.225 "block_size": 512, 00:21:27.225 "num_blocks": 126976, 00:21:27.225 "uuid": "ac232b0e-92cc-46b6-b53d-638180928168", 00:21:27.225 "assigned_rate_limits": { 00:21:27.225 "rw_ios_per_sec": 0, 00:21:27.225 "rw_mbytes_per_sec": 0, 00:21:27.225 "r_mbytes_per_sec": 0, 00:21:27.225 "w_mbytes_per_sec": 0 00:21:27.225 }, 00:21:27.225 "claimed": false, 00:21:27.225 "zoned": false, 00:21:27.225 "supported_io_types": { 00:21:27.225 "read": true, 00:21:27.225 "write": true, 00:21:27.225 "unmap": true, 00:21:27.225 "flush": true, 00:21:27.225 "reset": true, 00:21:27.225 "nvme_admin": false, 00:21:27.225 "nvme_io": false, 00:21:27.225 "nvme_io_md": false, 00:21:27.225 "write_zeroes": true, 00:21:27.225 "zcopy": false, 00:21:27.225 "get_zone_info": false, 00:21:27.225 "zone_management": false, 00:21:27.225 "zone_append": false, 00:21:27.225 "compare": false, 00:21:27.225 "compare_and_write": false, 00:21:27.225 "abort": false, 00:21:27.225 "seek_hole": false, 00:21:27.225 "seek_data": false, 00:21:27.225 "copy": false, 00:21:27.225 "nvme_iov_md": false 00:21:27.225 }, 00:21:27.225 "memory_domains": [ 00:21:27.225 { 00:21:27.225 
"dma_device_id": "system", 00:21:27.225 "dma_device_type": 1 00:21:27.225 }, 00:21:27.225 { 00:21:27.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.225 "dma_device_type": 2 00:21:27.225 }, 00:21:27.225 { 00:21:27.225 "dma_device_id": "system", 00:21:27.225 "dma_device_type": 1 00:21:27.225 }, 00:21:27.225 { 00:21:27.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.225 "dma_device_type": 2 00:21:27.225 } 00:21:27.225 ], 00:21:27.225 "driver_specific": { 00:21:27.225 "raid": { 00:21:27.225 "uuid": "ac232b0e-92cc-46b6-b53d-638180928168", 00:21:27.225 "strip_size_kb": 64, 00:21:27.225 "state": "online", 00:21:27.225 "raid_level": "concat", 00:21:27.225 "superblock": true, 00:21:27.225 "num_base_bdevs": 2, 00:21:27.225 "num_base_bdevs_discovered": 2, 00:21:27.225 "num_base_bdevs_operational": 2, 00:21:27.225 "base_bdevs_list": [ 00:21:27.225 { 00:21:27.225 "name": "BaseBdev1", 00:21:27.225 "uuid": "9e4d94c1-0da9-4416-b41d-6562ed01c515", 00:21:27.225 "is_configured": true, 00:21:27.225 "data_offset": 2048, 00:21:27.225 "data_size": 63488 00:21:27.225 }, 00:21:27.225 { 00:21:27.225 "name": "BaseBdev2", 00:21:27.225 "uuid": "0bfa27d5-ca92-4ee3-b5d6-a7796fcbd4dd", 00:21:27.225 "is_configured": true, 00:21:27.225 "data_offset": 2048, 00:21:27.225 "data_size": 63488 00:21:27.225 } 00:21:27.225 ] 00:21:27.225 } 00:21:27.225 } 00:21:27.225 }' 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:27.225 BaseBdev2' 00:21:27.225 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:27.485 07:18:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 [2024-11-20 07:18:51.629034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.485 [2024-11-20 07:18:51.629080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.485 [2024-11-20 07:18:51.629161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.485 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.743 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.743 "name": "Existed_Raid", 00:21:27.743 "uuid": "ac232b0e-92cc-46b6-b53d-638180928168", 00:21:27.743 "strip_size_kb": 64, 00:21:27.743 "state": "offline", 00:21:27.743 "raid_level": "concat", 00:21:27.743 "superblock": true, 00:21:27.743 "num_base_bdevs": 2, 00:21:27.743 "num_base_bdevs_discovered": 1, 00:21:27.743 "num_base_bdevs_operational": 1, 00:21:27.743 "base_bdevs_list": [ 00:21:27.743 { 00:21:27.743 "name": null, 00:21:27.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.743 "is_configured": false, 00:21:27.743 "data_offset": 0, 00:21:27.743 "data_size": 63488 00:21:27.743 }, 00:21:27.743 { 00:21:27.743 "name": "BaseBdev2", 00:21:27.743 "uuid": "0bfa27d5-ca92-4ee3-b5d6-a7796fcbd4dd", 00:21:27.743 "is_configured": true, 00:21:27.743 "data_offset": 2048, 00:21:27.743 "data_size": 63488 00:21:27.743 } 00:21:27.743 ] 
00:21:27.743 }' 00:21:27.743 07:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.743 07:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.002 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.261 [2024-11-20 07:18:52.294247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:28.261 [2024-11-20 07:18:52.294325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.261 07:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62122 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62122 ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62122 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62122 00:21:28.261 killing process with pid 62122 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62122' 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62122 00:21:28.261 [2024-11-20 07:18:52.478468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.261 07:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62122 00:21:28.261 [2024-11-20 07:18:52.493980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.638 07:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:29.638 ************************************ 00:21:29.638 END TEST raid_state_function_test_sb 00:21:29.638 ************************************ 00:21:29.638 00:21:29.638 real 0m5.656s 00:21:29.638 user 0m8.587s 00:21:29.638 sys 0m0.792s 00:21:29.638 07:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.638 07:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.638 07:18:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:21:29.638 07:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:29.638 07:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.638 07:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.638 ************************************ 00:21:29.639 START TEST raid_superblock_test 00:21:29.639 ************************************ 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62374 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62374 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62374 ']' 00:21:29.639 
07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.639 07:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.639 [2024-11-20 07:18:53.731755] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:29.639 [2024-11-20 07:18:53.731956] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:21:29.639 [2024-11-20 07:18:53.919027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.898 [2024-11-20 07:18:54.057045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.156 [2024-11-20 07:18:54.264103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.156 [2024-11-20 07:18:54.264187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.415 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.415 malloc1 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.674 [2024-11-20 07:18:54.710653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:30.674 [2024-11-20 07:18:54.710866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.674 [2024-11-20 07:18:54.711031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:30.674 [2024-11-20 07:18:54.711161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:21:30.674 [2024-11-20 07:18:54.714438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.674 [2024-11-20 07:18:54.714670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:30.674 pt1 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.674 malloc2 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:30.674 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.674 [2024-11-20 07:18:54.767964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:30.674 [2024-11-20 07:18:54.768170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.674 [2024-11-20 07:18:54.768213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:30.674 [2024-11-20 07:18:54.768229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.674 [2024-11-20 07:18:54.771107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.674 [2024-11-20 07:18:54.771155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:30.674 pt2 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.675 [2024-11-20 07:18:54.780104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:30.675 [2024-11-20 07:18:54.782635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:30.675 [2024-11-20 07:18:54.782846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:30.675 [2024-11-20 07:18:54.782865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:21:30.675 [2024-11-20 07:18:54.783184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:30.675 [2024-11-20 07:18:54.783399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:30.675 [2024-11-20 07:18:54.783422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:30.675 [2024-11-20 07:18:54.783632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.675 07:18:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.675 "name": "raid_bdev1", 00:21:30.675 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:30.675 "strip_size_kb": 64, 00:21:30.675 "state": "online", 00:21:30.675 "raid_level": "concat", 00:21:30.675 "superblock": true, 00:21:30.675 "num_base_bdevs": 2, 00:21:30.675 "num_base_bdevs_discovered": 2, 00:21:30.675 "num_base_bdevs_operational": 2, 00:21:30.675 "base_bdevs_list": [ 00:21:30.675 { 00:21:30.675 "name": "pt1", 00:21:30.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:30.675 "is_configured": true, 00:21:30.675 "data_offset": 2048, 00:21:30.675 "data_size": 63488 00:21:30.675 }, 00:21:30.675 { 00:21:30.675 "name": "pt2", 00:21:30.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:30.675 "is_configured": true, 00:21:30.675 "data_offset": 2048, 00:21:30.675 "data_size": 63488 00:21:30.675 } 00:21:30.675 ] 00:21:30.675 }' 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.675 07:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:31.241 
07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.241 [2024-11-20 07:18:55.304622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.241 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:31.241 "name": "raid_bdev1", 00:21:31.241 "aliases": [ 00:21:31.241 "308ba973-28af-4a09-bfd0-3c9b43424f58" 00:21:31.241 ], 00:21:31.241 "product_name": "Raid Volume", 00:21:31.241 "block_size": 512, 00:21:31.241 "num_blocks": 126976, 00:21:31.241 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:31.241 "assigned_rate_limits": { 00:21:31.241 "rw_ios_per_sec": 0, 00:21:31.241 "rw_mbytes_per_sec": 0, 00:21:31.241 "r_mbytes_per_sec": 0, 00:21:31.241 "w_mbytes_per_sec": 0 00:21:31.241 }, 00:21:31.241 "claimed": false, 00:21:31.241 "zoned": false, 00:21:31.241 "supported_io_types": { 00:21:31.241 "read": true, 00:21:31.241 "write": true, 00:21:31.241 "unmap": true, 00:21:31.241 "flush": true, 00:21:31.241 "reset": true, 00:21:31.241 "nvme_admin": false, 00:21:31.242 "nvme_io": false, 00:21:31.242 "nvme_io_md": false, 00:21:31.242 "write_zeroes": true, 00:21:31.242 "zcopy": false, 00:21:31.242 "get_zone_info": false, 00:21:31.242 "zone_management": false, 00:21:31.242 "zone_append": false, 00:21:31.242 "compare": false, 00:21:31.242 "compare_and_write": false, 00:21:31.242 "abort": false, 00:21:31.242 "seek_hole": false, 00:21:31.242 
"seek_data": false, 00:21:31.242 "copy": false, 00:21:31.242 "nvme_iov_md": false 00:21:31.242 }, 00:21:31.242 "memory_domains": [ 00:21:31.242 { 00:21:31.242 "dma_device_id": "system", 00:21:31.242 "dma_device_type": 1 00:21:31.242 }, 00:21:31.242 { 00:21:31.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.242 "dma_device_type": 2 00:21:31.242 }, 00:21:31.242 { 00:21:31.242 "dma_device_id": "system", 00:21:31.242 "dma_device_type": 1 00:21:31.242 }, 00:21:31.242 { 00:21:31.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.242 "dma_device_type": 2 00:21:31.242 } 00:21:31.242 ], 00:21:31.242 "driver_specific": { 00:21:31.242 "raid": { 00:21:31.242 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:31.242 "strip_size_kb": 64, 00:21:31.242 "state": "online", 00:21:31.242 "raid_level": "concat", 00:21:31.242 "superblock": true, 00:21:31.242 "num_base_bdevs": 2, 00:21:31.242 "num_base_bdevs_discovered": 2, 00:21:31.242 "num_base_bdevs_operational": 2, 00:21:31.242 "base_bdevs_list": [ 00:21:31.242 { 00:21:31.242 "name": "pt1", 00:21:31.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:31.242 "is_configured": true, 00:21:31.242 "data_offset": 2048, 00:21:31.242 "data_size": 63488 00:21:31.242 }, 00:21:31.242 { 00:21:31.242 "name": "pt2", 00:21:31.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.242 "is_configured": true, 00:21:31.242 "data_offset": 2048, 00:21:31.242 "data_size": 63488 00:21:31.242 } 00:21:31.242 ] 00:21:31.242 } 00:21:31.242 } 00:21:31.242 }' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:31.242 pt2' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.242 07:18:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.242 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:31.501 [2024-11-20 07:18:55.572714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=308ba973-28af-4a09-bfd0-3c9b43424f58 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 308ba973-28af-4a09-bfd0-3c9b43424f58 ']' 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.501 [2024-11-20 07:18:55.620335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:31.501 [2024-11-20 07:18:55.620368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.501 [2024-11-20 07:18:55.620466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.501 [2024-11-20 07:18:55.620554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.501 [2024-11-20 07:18:55.620585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:31.501 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 [2024-11-20 07:18:55.756383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:31.502 [2024-11-20 07:18:55.758924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:31.502 [2024-11-20 07:18:55.759212] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:31.502 [2024-11-20 07:18:55.759291] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:31.502 [2024-11-20 07:18:55.759317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:31.502 [2024-11-20 07:18:55.759332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:31.502 request: 00:21:31.502 { 00:21:31.502 "name": "raid_bdev1", 00:21:31.502 "raid_level": "concat", 00:21:31.502 "base_bdevs": [ 00:21:31.502 "malloc1", 00:21:31.502 "malloc2" 00:21:31.502 ], 00:21:31.502 "strip_size_kb": 64, 00:21:31.502 "superblock": false, 00:21:31.502 "method": "bdev_raid_create", 00:21:31.502 "req_id": 1 00:21:31.502 } 00:21:31.502 Got JSON-RPC error response 00:21:31.502 response: 00:21:31.502 { 00:21:31.502 "code": -17, 00:21:31.502 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:31.502 } 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.502 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.502 
07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.760 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.761 [2024-11-20 07:18:55.820396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:31.761 [2024-11-20 07:18:55.820510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.761 [2024-11-20 07:18:55.820542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:31.761 [2024-11-20 07:18:55.820559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.761 [2024-11-20 07:18:55.823763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.761 [2024-11-20 07:18:55.823827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:31.761 [2024-11-20 07:18:55.823936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:31.761 [2024-11-20 07:18:55.824058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:31.761 pt1 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.761 "name": "raid_bdev1", 00:21:31.761 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:31.761 "strip_size_kb": 64, 00:21:31.761 "state": "configuring", 00:21:31.761 "raid_level": "concat", 00:21:31.761 "superblock": true, 00:21:31.761 "num_base_bdevs": 2, 00:21:31.761 "num_base_bdevs_discovered": 1, 00:21:31.761 "num_base_bdevs_operational": 2, 00:21:31.761 "base_bdevs_list": [ 00:21:31.761 { 00:21:31.761 "name": "pt1", 00:21:31.761 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:31.761 "is_configured": true, 00:21:31.761 "data_offset": 2048, 00:21:31.761 "data_size": 63488 00:21:31.761 }, 00:21:31.761 { 00:21:31.761 "name": null, 00:21:31.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.761 "is_configured": false, 00:21:31.761 "data_offset": 2048, 00:21:31.761 "data_size": 63488 00:21:31.761 } 00:21:31.761 ] 00:21:31.761 }' 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.761 07:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.328 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.328 [2024-11-20 07:18:56.372682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:32.328 [2024-11-20 07:18:56.372784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.328 [2024-11-20 07:18:56.372820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:32.328 [2024-11-20 07:18:56.372853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.329 [2024-11-20 07:18:56.373605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.329 [2024-11-20 07:18:56.373654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:21:32.329 [2024-11-20 07:18:56.373767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:32.329 [2024-11-20 07:18:56.373811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:32.329 [2024-11-20 07:18:56.373973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:32.329 [2024-11-20 07:18:56.373994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:32.329 [2024-11-20 07:18:56.374285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:32.329 [2024-11-20 07:18:56.374476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:32.329 [2024-11-20 07:18:56.374492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:32.329 [2024-11-20 07:18:56.374720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.329 pt2 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.329 "name": "raid_bdev1", 00:21:32.329 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:32.329 "strip_size_kb": 64, 00:21:32.329 "state": "online", 00:21:32.329 "raid_level": "concat", 00:21:32.329 "superblock": true, 00:21:32.329 "num_base_bdevs": 2, 00:21:32.329 "num_base_bdevs_discovered": 2, 00:21:32.329 "num_base_bdevs_operational": 2, 00:21:32.329 "base_bdevs_list": [ 00:21:32.329 { 00:21:32.329 "name": "pt1", 00:21:32.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.329 "is_configured": true, 00:21:32.329 "data_offset": 2048, 00:21:32.329 "data_size": 63488 00:21:32.329 }, 00:21:32.329 { 00:21:32.329 "name": "pt2", 00:21:32.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.329 "is_configured": true, 00:21:32.329 "data_offset": 2048, 00:21:32.329 "data_size": 63488 00:21:32.329 } 00:21:32.329 ] 00:21:32.329 }' 00:21:32.329 07:18:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.329 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:32.921 [2024-11-20 07:18:56.937214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.921 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.921 "name": "raid_bdev1", 00:21:32.921 "aliases": [ 00:21:32.921 "308ba973-28af-4a09-bfd0-3c9b43424f58" 00:21:32.921 ], 00:21:32.921 "product_name": "Raid Volume", 00:21:32.921 "block_size": 512, 00:21:32.921 "num_blocks": 126976, 00:21:32.921 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:32.921 "assigned_rate_limits": { 00:21:32.921 "rw_ios_per_sec": 0, 00:21:32.921 "rw_mbytes_per_sec": 0, 00:21:32.921 
"r_mbytes_per_sec": 0, 00:21:32.921 "w_mbytes_per_sec": 0 00:21:32.921 }, 00:21:32.921 "claimed": false, 00:21:32.922 "zoned": false, 00:21:32.922 "supported_io_types": { 00:21:32.922 "read": true, 00:21:32.922 "write": true, 00:21:32.922 "unmap": true, 00:21:32.922 "flush": true, 00:21:32.922 "reset": true, 00:21:32.922 "nvme_admin": false, 00:21:32.922 "nvme_io": false, 00:21:32.922 "nvme_io_md": false, 00:21:32.922 "write_zeroes": true, 00:21:32.922 "zcopy": false, 00:21:32.922 "get_zone_info": false, 00:21:32.922 "zone_management": false, 00:21:32.922 "zone_append": false, 00:21:32.922 "compare": false, 00:21:32.922 "compare_and_write": false, 00:21:32.922 "abort": false, 00:21:32.922 "seek_hole": false, 00:21:32.922 "seek_data": false, 00:21:32.922 "copy": false, 00:21:32.922 "nvme_iov_md": false 00:21:32.922 }, 00:21:32.922 "memory_domains": [ 00:21:32.922 { 00:21:32.922 "dma_device_id": "system", 00:21:32.922 "dma_device_type": 1 00:21:32.922 }, 00:21:32.922 { 00:21:32.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.922 "dma_device_type": 2 00:21:32.922 }, 00:21:32.922 { 00:21:32.922 "dma_device_id": "system", 00:21:32.922 "dma_device_type": 1 00:21:32.922 }, 00:21:32.922 { 00:21:32.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.922 "dma_device_type": 2 00:21:32.922 } 00:21:32.922 ], 00:21:32.922 "driver_specific": { 00:21:32.922 "raid": { 00:21:32.922 "uuid": "308ba973-28af-4a09-bfd0-3c9b43424f58", 00:21:32.922 "strip_size_kb": 64, 00:21:32.922 "state": "online", 00:21:32.922 "raid_level": "concat", 00:21:32.922 "superblock": true, 00:21:32.922 "num_base_bdevs": 2, 00:21:32.922 "num_base_bdevs_discovered": 2, 00:21:32.922 "num_base_bdevs_operational": 2, 00:21:32.922 "base_bdevs_list": [ 00:21:32.922 { 00:21:32.922 "name": "pt1", 00:21:32.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.922 "is_configured": true, 00:21:32.922 "data_offset": 2048, 00:21:32.922 "data_size": 63488 00:21:32.922 }, 00:21:32.922 { 00:21:32.922 "name": 
"pt2", 00:21:32.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.922 "is_configured": true, 00:21:32.922 "data_offset": 2048, 00:21:32.922 "data_size": 63488 00:21:32.922 } 00:21:32.922 ] 00:21:32.922 } 00:21:32.922 } 00:21:32.922 }' 00:21:32.922 07:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:32.922 pt2' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.922 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:33.182 [2024-11-20 07:18:57.217263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 308ba973-28af-4a09-bfd0-3c9b43424f58 '!=' 308ba973-28af-4a09-bfd0-3c9b43424f58 ']' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62374 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62374 ']' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62374 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62374 00:21:33.182 killing process with pid 62374 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62374' 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62374 00:21:33.182 [2024-11-20 07:18:57.301781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.182 07:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62374 00:21:33.182 [2024-11-20 07:18:57.301900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.182 [2024-11-20 07:18:57.302042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.183 [2024-11-20 07:18:57.302060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:33.440 [2024-11-20 07:18:57.491362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.376 07:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:34.376 00:21:34.376 real 0m4.914s 00:21:34.376 user 0m7.246s 00:21:34.376 sys 0m0.729s 00:21:34.376 07:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.376 ************************************ 00:21:34.376 END TEST 
raid_superblock_test 00:21:34.376 ************************************ 00:21:34.376 07:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.376 07:18:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:21:34.376 07:18:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:34.376 07:18:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.376 07:18:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.376 ************************************ 00:21:34.376 START TEST raid_read_error_test 00:21:34.376 ************************************ 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1JXZoCWz6a 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62591 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62591 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62591 ']' 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.376 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.376 07:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.635 [2024-11-20 07:18:58.711284] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:34.635 [2024-11-20 07:18:58.711493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62591 ] 00:21:34.635 [2024-11-20 07:18:58.893238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.894 [2024-11-20 07:18:59.026701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.153 [2024-11-20 07:18:59.230776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.153 [2024-11-20 07:18:59.230861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.447 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.447 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:35.447 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 
BaseBdev1_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 true 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 [2024-11-20 07:18:59.797690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:35.705 [2024-11-20 07:18:59.797757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.705 [2024-11-20 07:18:59.797788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:35.705 [2024-11-20 07:18:59.797807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.705 [2024-11-20 07:18:59.800710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.705 [2024-11-20 07:18:59.800760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:35.705 BaseBdev1 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 BaseBdev2_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 true 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 [2024-11-20 07:18:59.858877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:35.705 [2024-11-20 07:18:59.858945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.705 [2024-11-20 07:18:59.858972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:35.705 [2024-11-20 07:18:59.858991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.705 [2024-11-20 07:18:59.861731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.705 [2024-11-20 07:18:59.861779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:35.705 BaseBdev2 00:21:35.705 07:18:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.705 [2024-11-20 07:18:59.866968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.705 [2024-11-20 07:18:59.869357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.705 [2024-11-20 07:18:59.869646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:35.705 [2024-11-20 07:18:59.869680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:35.705 [2024-11-20 07:18:59.869980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:35.705 [2024-11-20 07:18:59.870231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:35.705 [2024-11-20 07:18:59.870261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:35.705 [2024-11-20 07:18:59.870449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.705 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.706 07:18:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.706 "name": "raid_bdev1", 00:21:35.706 "uuid": "eb250b0f-c649-454d-bd0e-3413af524e40", 00:21:35.706 "strip_size_kb": 64, 00:21:35.706 "state": "online", 00:21:35.706 "raid_level": "concat", 00:21:35.706 "superblock": true, 00:21:35.706 "num_base_bdevs": 2, 00:21:35.706 "num_base_bdevs_discovered": 2, 00:21:35.706 "num_base_bdevs_operational": 2, 00:21:35.706 "base_bdevs_list": [ 00:21:35.706 { 00:21:35.706 "name": "BaseBdev1", 00:21:35.706 "uuid": "f237ad6a-3600-5b2b-960b-57e97537ff7d", 00:21:35.706 "is_configured": true, 00:21:35.706 "data_offset": 2048, 00:21:35.706 "data_size": 63488 00:21:35.706 }, 
00:21:35.706 { 00:21:35.706 "name": "BaseBdev2", 00:21:35.706 "uuid": "c826b293-7ea1-5165-9824-fed0086a2921", 00:21:35.706 "is_configured": true, 00:21:35.706 "data_offset": 2048, 00:21:35.706 "data_size": 63488 00:21:35.706 } 00:21:35.706 ] 00:21:35.706 }' 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.706 07:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.271 07:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:36.271 07:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:36.271 [2024-11-20 07:19:00.504556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.207 07:19:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.207 "name": "raid_bdev1", 00:21:37.207 "uuid": "eb250b0f-c649-454d-bd0e-3413af524e40", 00:21:37.207 "strip_size_kb": 64, 00:21:37.207 "state": "online", 00:21:37.207 "raid_level": "concat", 00:21:37.207 "superblock": true, 00:21:37.207 "num_base_bdevs": 2, 00:21:37.207 "num_base_bdevs_discovered": 2, 00:21:37.207 "num_base_bdevs_operational": 2, 00:21:37.207 "base_bdevs_list": [ 00:21:37.207 { 00:21:37.207 "name": "BaseBdev1", 00:21:37.207 "uuid": "f237ad6a-3600-5b2b-960b-57e97537ff7d", 00:21:37.207 "is_configured": true, 00:21:37.207 "data_offset": 2048, 00:21:37.207 "data_size": 63488 00:21:37.207 }, 
00:21:37.207 { 00:21:37.207 "name": "BaseBdev2", 00:21:37.207 "uuid": "c826b293-7ea1-5165-9824-fed0086a2921", 00:21:37.207 "is_configured": true, 00:21:37.207 "data_offset": 2048, 00:21:37.207 "data_size": 63488 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }' 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.207 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.774 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.774 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.774 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.774 [2024-11-20 07:19:01.919720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.774 [2024-11-20 07:19:01.919766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.774 [2024-11-20 07:19:01.923343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.775 [2024-11-20 07:19:01.923441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.775 [2024-11-20 07:19:01.923483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.775 [2024-11-20 07:19:01.923503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:37.775 { 00:21:37.775 "results": [ 00:21:37.775 { 00:21:37.775 "job": "raid_bdev1", 00:21:37.775 "core_mask": "0x1", 00:21:37.775 "workload": "randrw", 00:21:37.775 "percentage": 50, 00:21:37.775 "status": "finished", 00:21:37.775 "queue_depth": 1, 00:21:37.775 "io_size": 131072, 00:21:37.775 "runtime": 1.412713, 00:21:37.775 "iops": 10496.116337854894, 00:21:37.775 "mibps": 1312.0145422318617, 00:21:37.775 "io_failed": 1, 
00:21:37.775 "io_timeout": 0, 00:21:37.775 "avg_latency_us": 133.60660229648292, 00:21:37.775 "min_latency_us": 38.86545454545455, 00:21:37.775 "max_latency_us": 1980.9745454545455 00:21:37.775 } 00:21:37.775 ], 00:21:37.775 "core_count": 1 00:21:37.775 } 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62591 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62591 ']' 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62591 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62591 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62591' 00:21:37.775 killing process with pid 62591 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62591 00:21:37.775 [2024-11-20 07:19:01.958668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.775 07:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62591 00:21:38.034 [2024-11-20 07:19:02.084925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1JXZoCWz6a 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:21:38.968 00:21:38.968 real 0m4.615s 00:21:38.968 user 0m5.809s 00:21:38.968 sys 0m0.566s 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.968 07:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.968 ************************************ 00:21:38.968 END TEST raid_read_error_test 00:21:38.968 ************************************ 00:21:38.968 07:19:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:21:38.968 07:19:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:38.968 07:19:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.968 07:19:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:39.227 ************************************ 00:21:39.227 START TEST raid_write_error_test 00:21:39.227 ************************************ 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:39.227 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6d85Ngalg6 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62737 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62737 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62737 ']' 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.228 07:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.228 [2024-11-20 07:19:03.374252] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:39.228 [2024-11-20 07:19:03.374430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:21:39.486 [2024-11-20 07:19:03.556813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.486 [2024-11-20 07:19:03.712488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.745 [2024-11-20 07:19:03.922754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.745 [2024-11-20 07:19:03.922797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 BaseBdev1_malloc 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 true 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 [2024-11-20 07:19:04.394545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:40.313 [2024-11-20 07:19:04.394664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.313 [2024-11-20 07:19:04.394695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:40.313 [2024-11-20 07:19:04.394719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.313 [2024-11-20 07:19:04.397481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.313 [2024-11-20 07:19:04.397576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:40.313 BaseBdev1 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 BaseBdev2_malloc 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:40.313 07:19:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 true 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 [2024-11-20 07:19:04.469407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:40.313 [2024-11-20 07:19:04.469476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.313 [2024-11-20 07:19:04.469501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:40.313 [2024-11-20 07:19:04.469519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.313 [2024-11-20 07:19:04.472324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.313 [2024-11-20 07:19:04.472374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:40.313 BaseBdev2 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 [2024-11-20 07:19:04.477483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:21:40.313 [2024-11-20 07:19:04.480080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.313 [2024-11-20 07:19:04.480365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:40.313 [2024-11-20 07:19:04.480400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:40.313 [2024-11-20 07:19:04.480715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:40.313 [2024-11-20 07:19:04.480958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:40.313 [2024-11-20 07:19:04.480987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:40.313 [2024-11-20 07:19:04.481175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.313 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.314 07:19:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.314 "name": "raid_bdev1", 00:21:40.314 "uuid": "f342639e-116e-4484-b14b-939fa74015ea", 00:21:40.314 "strip_size_kb": 64, 00:21:40.314 "state": "online", 00:21:40.314 "raid_level": "concat", 00:21:40.314 "superblock": true, 00:21:40.314 "num_base_bdevs": 2, 00:21:40.314 "num_base_bdevs_discovered": 2, 00:21:40.314 "num_base_bdevs_operational": 2, 00:21:40.314 "base_bdevs_list": [ 00:21:40.314 { 00:21:40.314 "name": "BaseBdev1", 00:21:40.314 "uuid": "d97df373-7167-56bb-8a62-5561527649f1", 00:21:40.314 "is_configured": true, 00:21:40.314 "data_offset": 2048, 00:21:40.314 "data_size": 63488 00:21:40.314 }, 00:21:40.314 { 00:21:40.314 "name": "BaseBdev2", 00:21:40.314 "uuid": "a899be88-a0e3-5592-9fc0-17a874519d40", 00:21:40.314 "is_configured": true, 00:21:40.314 "data_offset": 2048, 00:21:40.314 "data_size": 63488 00:21:40.314 } 00:21:40.314 ] 00:21:40.314 }' 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.314 07:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.880 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:21:40.880 07:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:40.880 [2024-11-20 07:19:05.127143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.816 "name": "raid_bdev1", 00:21:41.816 "uuid": "f342639e-116e-4484-b14b-939fa74015ea", 00:21:41.816 "strip_size_kb": 64, 00:21:41.816 "state": "online", 00:21:41.816 "raid_level": "concat", 00:21:41.816 "superblock": true, 00:21:41.816 "num_base_bdevs": 2, 00:21:41.816 "num_base_bdevs_discovered": 2, 00:21:41.816 "num_base_bdevs_operational": 2, 00:21:41.816 "base_bdevs_list": [ 00:21:41.816 { 00:21:41.816 "name": "BaseBdev1", 00:21:41.816 "uuid": "d97df373-7167-56bb-8a62-5561527649f1", 00:21:41.816 "is_configured": true, 00:21:41.816 "data_offset": 2048, 00:21:41.816 "data_size": 63488 00:21:41.816 }, 00:21:41.816 { 00:21:41.816 "name": "BaseBdev2", 00:21:41.816 "uuid": "a899be88-a0e3-5592-9fc0-17a874519d40", 00:21:41.816 "is_configured": true, 00:21:41.816 "data_offset": 2048, 00:21:41.816 "data_size": 63488 00:21:41.816 } 00:21:41.816 ] 00:21:41.816 }' 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.816 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.383 07:19:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:42.383 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.383 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.383 [2024-11-20 07:19:06.534424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.383 [2024-11-20 07:19:06.534485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.383 [2024-11-20 07:19:06.537920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.383 [2024-11-20 07:19:06.537981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.383 [2024-11-20 07:19:06.538055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.384 [2024-11-20 07:19:06.538075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:42.384 { 00:21:42.384 "results": [ 00:21:42.384 { 00:21:42.384 "job": "raid_bdev1", 00:21:42.384 "core_mask": "0x1", 00:21:42.384 "workload": "randrw", 00:21:42.384 "percentage": 50, 00:21:42.384 "status": "finished", 00:21:42.384 "queue_depth": 1, 00:21:42.384 "io_size": 131072, 00:21:42.384 "runtime": 1.404787, 00:21:42.384 "iops": 10677.775349572568, 00:21:42.384 "mibps": 1334.721918696571, 00:21:42.384 "io_failed": 1, 00:21:42.384 "io_timeout": 0, 00:21:42.384 "avg_latency_us": 130.47559544515215, 00:21:42.384 "min_latency_us": 38.167272727272724, 00:21:42.384 "max_latency_us": 1876.7127272727273 00:21:42.384 } 00:21:42.384 ], 00:21:42.384 "core_count": 1 00:21:42.384 } 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62737 00:21:42.384 07:19:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62737 ']' 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62737 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62737 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.384 killing process with pid 62737 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62737' 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62737 00:21:42.384 [2024-11-20 07:19:06.576196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:42.384 07:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62737 00:21:42.642 [2024-11-20 07:19:06.702732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6d85Ngalg6 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:43.578 07:19:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:21:43.578 00:21:43.578 real 0m4.530s 00:21:43.578 user 0m5.671s 00:21:43.578 sys 0m0.548s 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.578 07:19:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.578 ************************************ 00:21:43.578 END TEST raid_write_error_test 00:21:43.578 ************************************ 00:21:43.578 07:19:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:43.578 07:19:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:21:43.578 07:19:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:43.578 07:19:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.578 07:19:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.578 ************************************ 00:21:43.578 START TEST raid_state_function_test 00:21:43.578 ************************************ 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62875 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:43.578 Process raid pid: 62875 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62875' 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62875 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62875 ']' 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.578 07:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 [2024-11-20 07:19:07.957665] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:43.837 [2024-11-20 07:19:07.957864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.096 [2024-11-20 07:19:08.143679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.096 [2024-11-20 07:19:08.275080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.354 [2024-11-20 07:19:08.495456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.354 [2024-11-20 07:19:08.495538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.922 [2024-11-20 07:19:08.910800] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:44.922 [2024-11-20 07:19:08.910865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:44.922 [2024-11-20 07:19:08.910882] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:44.922 [2024-11-20 07:19:08.910899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.922 07:19:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.922 "name": "Existed_Raid", 00:21:44.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.922 "strip_size_kb": 0, 00:21:44.922 "state": "configuring", 00:21:44.922 
"raid_level": "raid1", 00:21:44.922 "superblock": false, 00:21:44.922 "num_base_bdevs": 2, 00:21:44.922 "num_base_bdevs_discovered": 0, 00:21:44.922 "num_base_bdevs_operational": 2, 00:21:44.922 "base_bdevs_list": [ 00:21:44.922 { 00:21:44.922 "name": "BaseBdev1", 00:21:44.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.922 "is_configured": false, 00:21:44.922 "data_offset": 0, 00:21:44.922 "data_size": 0 00:21:44.922 }, 00:21:44.922 { 00:21:44.922 "name": "BaseBdev2", 00:21:44.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.922 "is_configured": false, 00:21:44.922 "data_offset": 0, 00:21:44.922 "data_size": 0 00:21:44.922 } 00:21:44.922 ] 00:21:44.922 }' 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.922 07:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 [2024-11-20 07:19:09.402907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:45.180 [2024-11-20 07:19:09.402954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:45.180 [2024-11-20 07:19:09.410865] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.180 [2024-11-20 07:19:09.410921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.180 [2024-11-20 07:19:09.410940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.180 [2024-11-20 07:19:09.410959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 [2024-11-20 07:19:09.456364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.180 BaseBdev1 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.180 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 [ 00:21:45.439 { 00:21:45.439 "name": "BaseBdev1", 00:21:45.439 "aliases": [ 00:21:45.439 "7c0520d1-33a4-428f-9023-105deec1f5b0" 00:21:45.439 ], 00:21:45.439 "product_name": "Malloc disk", 00:21:45.439 "block_size": 512, 00:21:45.439 "num_blocks": 65536, 00:21:45.439 "uuid": "7c0520d1-33a4-428f-9023-105deec1f5b0", 00:21:45.439 "assigned_rate_limits": { 00:21:45.439 "rw_ios_per_sec": 0, 00:21:45.439 "rw_mbytes_per_sec": 0, 00:21:45.439 "r_mbytes_per_sec": 0, 00:21:45.439 "w_mbytes_per_sec": 0 00:21:45.439 }, 00:21:45.439 "claimed": true, 00:21:45.439 "claim_type": "exclusive_write", 00:21:45.439 "zoned": false, 00:21:45.439 "supported_io_types": { 00:21:45.439 "read": true, 00:21:45.439 "write": true, 00:21:45.439 "unmap": true, 00:21:45.439 "flush": true, 00:21:45.439 "reset": true, 00:21:45.439 "nvme_admin": false, 00:21:45.439 "nvme_io": false, 00:21:45.439 "nvme_io_md": false, 00:21:45.439 "write_zeroes": true, 00:21:45.439 "zcopy": true, 00:21:45.439 "get_zone_info": false, 00:21:45.439 "zone_management": false, 00:21:45.439 "zone_append": false, 00:21:45.439 "compare": false, 00:21:45.439 "compare_and_write": false, 00:21:45.439 "abort": true, 00:21:45.439 "seek_hole": false, 00:21:45.439 "seek_data": false, 00:21:45.439 "copy": true, 00:21:45.439 "nvme_iov_md": 
false 00:21:45.439 }, 00:21:45.439 "memory_domains": [ 00:21:45.439 { 00:21:45.439 "dma_device_id": "system", 00:21:45.439 "dma_device_type": 1 00:21:45.439 }, 00:21:45.439 { 00:21:45.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.439 "dma_device_type": 2 00:21:45.439 } 00:21:45.439 ], 00:21:45.439 "driver_specific": {} 00:21:45.439 } 00:21:45.439 ] 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.439 07:19:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.439 "name": "Existed_Raid", 00:21:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.439 "strip_size_kb": 0, 00:21:45.439 "state": "configuring", 00:21:45.439 "raid_level": "raid1", 00:21:45.439 "superblock": false, 00:21:45.439 "num_base_bdevs": 2, 00:21:45.439 "num_base_bdevs_discovered": 1, 00:21:45.439 "num_base_bdevs_operational": 2, 00:21:45.439 "base_bdevs_list": [ 00:21:45.439 { 00:21:45.439 "name": "BaseBdev1", 00:21:45.439 "uuid": "7c0520d1-33a4-428f-9023-105deec1f5b0", 00:21:45.439 "is_configured": true, 00:21:45.439 "data_offset": 0, 00:21:45.439 "data_size": 65536 00:21:45.439 }, 00:21:45.439 { 00:21:45.439 "name": "BaseBdev2", 00:21:45.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.439 "is_configured": false, 00:21:45.439 "data_offset": 0, 00:21:45.439 "data_size": 0 00:21:45.439 } 00:21:45.439 ] 00:21:45.439 }' 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.439 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.698 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:45.698 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.698 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 [2024-11-20 07:19:09.988687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:45.957 [2024-11-20 07:19:09.988755] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:45.957 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.957 07:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:45.957 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.957 07:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 [2024-11-20 07:19:09.996742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.957 [2024-11-20 07:19:09.999359] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.957 [2024-11-20 07:19:09.999416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.957 "name": "Existed_Raid", 00:21:45.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.957 "strip_size_kb": 0, 00:21:45.957 "state": "configuring", 00:21:45.957 "raid_level": "raid1", 00:21:45.957 "superblock": false, 00:21:45.957 "num_base_bdevs": 2, 00:21:45.957 "num_base_bdevs_discovered": 1, 00:21:45.957 "num_base_bdevs_operational": 2, 00:21:45.957 "base_bdevs_list": [ 00:21:45.957 { 00:21:45.957 "name": "BaseBdev1", 00:21:45.957 "uuid": "7c0520d1-33a4-428f-9023-105deec1f5b0", 00:21:45.957 "is_configured": true, 00:21:45.957 "data_offset": 0, 00:21:45.957 "data_size": 65536 00:21:45.957 }, 00:21:45.957 { 00:21:45.957 "name": "BaseBdev2", 00:21:45.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.957 "is_configured": false, 00:21:45.957 "data_offset": 0, 00:21:45.957 "data_size": 0 00:21:45.957 } 00:21:45.957 
] 00:21:45.957 }' 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.957 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.525 [2024-11-20 07:19:10.562056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.525 [2024-11-20 07:19:10.562134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:46.525 [2024-11-20 07:19:10.562149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:46.525 [2024-11-20 07:19:10.562502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:46.525 [2024-11-20 07:19:10.562773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:46.525 [2024-11-20 07:19:10.562809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:46.525 [2024-11-20 07:19:10.563124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.525 BaseBdev2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:46.525 07:19:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.525 [ 00:21:46.525 { 00:21:46.525 "name": "BaseBdev2", 00:21:46.525 "aliases": [ 00:21:46.525 "8e8970b1-09cf-4f03-8004-21e84b893413" 00:21:46.525 ], 00:21:46.525 "product_name": "Malloc disk", 00:21:46.525 "block_size": 512, 00:21:46.525 "num_blocks": 65536, 00:21:46.525 "uuid": "8e8970b1-09cf-4f03-8004-21e84b893413", 00:21:46.525 "assigned_rate_limits": { 00:21:46.525 "rw_ios_per_sec": 0, 00:21:46.525 "rw_mbytes_per_sec": 0, 00:21:46.525 "r_mbytes_per_sec": 0, 00:21:46.525 "w_mbytes_per_sec": 0 00:21:46.525 }, 00:21:46.525 "claimed": true, 00:21:46.525 "claim_type": "exclusive_write", 00:21:46.525 "zoned": false, 00:21:46.525 "supported_io_types": { 00:21:46.525 "read": true, 00:21:46.525 "write": true, 00:21:46.525 "unmap": true, 00:21:46.525 "flush": true, 00:21:46.525 "reset": true, 00:21:46.525 "nvme_admin": false, 00:21:46.525 "nvme_io": false, 00:21:46.525 "nvme_io_md": 
false, 00:21:46.525 "write_zeroes": true, 00:21:46.525 "zcopy": true, 00:21:46.525 "get_zone_info": false, 00:21:46.525 "zone_management": false, 00:21:46.525 "zone_append": false, 00:21:46.525 "compare": false, 00:21:46.525 "compare_and_write": false, 00:21:46.525 "abort": true, 00:21:46.525 "seek_hole": false, 00:21:46.525 "seek_data": false, 00:21:46.525 "copy": true, 00:21:46.525 "nvme_iov_md": false 00:21:46.525 }, 00:21:46.525 "memory_domains": [ 00:21:46.525 { 00:21:46.525 "dma_device_id": "system", 00:21:46.525 "dma_device_type": 1 00:21:46.525 }, 00:21:46.525 { 00:21:46.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.525 "dma_device_type": 2 00:21:46.525 } 00:21:46.525 ], 00:21:46.525 "driver_specific": {} 00:21:46.525 } 00:21:46.525 ] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.525 "name": "Existed_Raid", 00:21:46.525 "uuid": "2918d144-2b36-4d7c-962f-1e0512ca0f49", 00:21:46.525 "strip_size_kb": 0, 00:21:46.525 "state": "online", 00:21:46.525 "raid_level": "raid1", 00:21:46.525 "superblock": false, 00:21:46.525 "num_base_bdevs": 2, 00:21:46.525 "num_base_bdevs_discovered": 2, 00:21:46.525 "num_base_bdevs_operational": 2, 00:21:46.525 "base_bdevs_list": [ 00:21:46.525 { 00:21:46.525 "name": "BaseBdev1", 00:21:46.525 "uuid": "7c0520d1-33a4-428f-9023-105deec1f5b0", 00:21:46.525 "is_configured": true, 00:21:46.525 "data_offset": 0, 00:21:46.525 "data_size": 65536 00:21:46.525 }, 00:21:46.525 { 00:21:46.525 "name": "BaseBdev2", 00:21:46.525 "uuid": "8e8970b1-09cf-4f03-8004-21e84b893413", 00:21:46.525 "is_configured": true, 00:21:46.525 "data_offset": 0, 00:21:46.525 "data_size": 65536 00:21:46.525 } 00:21:46.525 ] 00:21:46.525 }' 00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:21:46.525 07:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.092 [2024-11-20 07:19:11.122730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:47.092 "name": "Existed_Raid", 00:21:47.092 "aliases": [ 00:21:47.092 "2918d144-2b36-4d7c-962f-1e0512ca0f49" 00:21:47.092 ], 00:21:47.092 "product_name": "Raid Volume", 00:21:47.092 "block_size": 512, 00:21:47.092 "num_blocks": 65536, 00:21:47.092 "uuid": "2918d144-2b36-4d7c-962f-1e0512ca0f49", 00:21:47.092 "assigned_rate_limits": { 00:21:47.092 "rw_ios_per_sec": 0, 00:21:47.092 "rw_mbytes_per_sec": 0, 00:21:47.092 "r_mbytes_per_sec": 
0, 00:21:47.092 "w_mbytes_per_sec": 0 00:21:47.092 }, 00:21:47.092 "claimed": false, 00:21:47.092 "zoned": false, 00:21:47.092 "supported_io_types": { 00:21:47.092 "read": true, 00:21:47.092 "write": true, 00:21:47.092 "unmap": false, 00:21:47.092 "flush": false, 00:21:47.092 "reset": true, 00:21:47.092 "nvme_admin": false, 00:21:47.092 "nvme_io": false, 00:21:47.092 "nvme_io_md": false, 00:21:47.092 "write_zeroes": true, 00:21:47.092 "zcopy": false, 00:21:47.092 "get_zone_info": false, 00:21:47.092 "zone_management": false, 00:21:47.092 "zone_append": false, 00:21:47.092 "compare": false, 00:21:47.092 "compare_and_write": false, 00:21:47.092 "abort": false, 00:21:47.092 "seek_hole": false, 00:21:47.092 "seek_data": false, 00:21:47.092 "copy": false, 00:21:47.092 "nvme_iov_md": false 00:21:47.092 }, 00:21:47.092 "memory_domains": [ 00:21:47.092 { 00:21:47.092 "dma_device_id": "system", 00:21:47.092 "dma_device_type": 1 00:21:47.092 }, 00:21:47.092 { 00:21:47.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.092 "dma_device_type": 2 00:21:47.092 }, 00:21:47.092 { 00:21:47.092 "dma_device_id": "system", 00:21:47.092 "dma_device_type": 1 00:21:47.092 }, 00:21:47.092 { 00:21:47.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.092 "dma_device_type": 2 00:21:47.092 } 00:21:47.092 ], 00:21:47.092 "driver_specific": { 00:21:47.092 "raid": { 00:21:47.092 "uuid": "2918d144-2b36-4d7c-962f-1e0512ca0f49", 00:21:47.092 "strip_size_kb": 0, 00:21:47.092 "state": "online", 00:21:47.092 "raid_level": "raid1", 00:21:47.092 "superblock": false, 00:21:47.092 "num_base_bdevs": 2, 00:21:47.092 "num_base_bdevs_discovered": 2, 00:21:47.092 "num_base_bdevs_operational": 2, 00:21:47.092 "base_bdevs_list": [ 00:21:47.092 { 00:21:47.092 "name": "BaseBdev1", 00:21:47.092 "uuid": "7c0520d1-33a4-428f-9023-105deec1f5b0", 00:21:47.092 "is_configured": true, 00:21:47.092 "data_offset": 0, 00:21:47.092 "data_size": 65536 00:21:47.092 }, 00:21:47.092 { 00:21:47.092 "name": "BaseBdev2", 
00:21:47.092 "uuid": "8e8970b1-09cf-4f03-8004-21e84b893413", 00:21:47.092 "is_configured": true, 00:21:47.092 "data_offset": 0, 00:21:47.092 "data_size": 65536 00:21:47.092 } 00:21:47.092 ] 00:21:47.092 } 00:21:47.092 } 00:21:47.092 }' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:47.092 BaseBdev2' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.092 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.093 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.093 [2024-11-20 07:19:11.350419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:47.351 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.352 "name": "Existed_Raid", 00:21:47.352 "uuid": "2918d144-2b36-4d7c-962f-1e0512ca0f49", 00:21:47.352 "strip_size_kb": 0, 00:21:47.352 "state": "online", 00:21:47.352 "raid_level": "raid1", 00:21:47.352 "superblock": false, 00:21:47.352 "num_base_bdevs": 2, 00:21:47.352 "num_base_bdevs_discovered": 1, 00:21:47.352 "num_base_bdevs_operational": 1, 00:21:47.352 "base_bdevs_list": [ 00:21:47.352 
{ 00:21:47.352 "name": null, 00:21:47.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.352 "is_configured": false, 00:21:47.352 "data_offset": 0, 00:21:47.352 "data_size": 65536 00:21:47.352 }, 00:21:47.352 { 00:21:47.352 "name": "BaseBdev2", 00:21:47.352 "uuid": "8e8970b1-09cf-4f03-8004-21e84b893413", 00:21:47.352 "is_configured": true, 00:21:47.352 "data_offset": 0, 00:21:47.352 "data_size": 65536 00:21:47.352 } 00:21:47.352 ] 00:21:47.352 }' 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.352 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.931 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.932 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:47.932 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:47.932 07:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:47.932 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.932 07:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:47.932 [2024-11-20 07:19:11.997358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:47.932 [2024-11-20 07:19:11.997483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:47.932 [2024-11-20 07:19:12.085856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.932 [2024-11-20 07:19:12.085922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:47.932 [2024-11-20 07:19:12.085943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62875 00:21:47.932 07:19:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62875 ']' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62875 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62875 00:21:47.932 killing process with pid 62875 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62875' 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62875 00:21:47.932 [2024-11-20 07:19:12.174799] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.932 07:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62875 00:21:47.932 [2024-11-20 07:19:12.190102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:49.334 00:21:49.334 real 0m5.405s 00:21:49.334 user 0m8.073s 00:21:49.334 sys 0m0.827s 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 ************************************ 00:21:49.334 END TEST raid_state_function_test 00:21:49.334 ************************************ 00:21:49.334 07:19:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:21:49.334 07:19:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:49.334 07:19:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.334 07:19:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 ************************************ 00:21:49.334 START TEST raid_state_function_test_sb 00:21:49.334 ************************************ 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:49.334 Process raid pid: 63139 00:21:49.334 07:19:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63139 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63139' 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63139 00:21:49.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63139 ']' 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.334 07:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 [2024-11-20 07:19:13.440476] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:49.334 [2024-11-20 07:19:13.440710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.592 [2024-11-20 07:19:13.655835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.592 [2024-11-20 07:19:13.813573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.851 [2024-11-20 07:19:14.068782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.851 [2024-11-20 07:19:14.069185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.419 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.420 [2024-11-20 07:19:14.470191] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:50.420 [2024-11-20 07:19:14.470283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:50.420 [2024-11-20 07:19:14.470301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.420 [2024-11-20 07:19:14.470319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.420 07:19:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.420 "name": "Existed_Raid", 00:21:50.420 "uuid": "6844810c-4ea1-42c8-9243-70029c0690fc", 00:21:50.420 "strip_size_kb": 0, 00:21:50.420 "state": "configuring", 00:21:50.420 "raid_level": "raid1", 00:21:50.420 "superblock": true, 00:21:50.420 "num_base_bdevs": 2, 00:21:50.420 "num_base_bdevs_discovered": 0, 00:21:50.420 "num_base_bdevs_operational": 2, 00:21:50.420 "base_bdevs_list": [ 00:21:50.420 { 00:21:50.420 "name": "BaseBdev1", 00:21:50.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.420 "is_configured": false, 00:21:50.420 "data_offset": 0, 00:21:50.420 "data_size": 0 00:21:50.420 }, 00:21:50.420 { 00:21:50.420 "name": "BaseBdev2", 00:21:50.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.420 "is_configured": false, 00:21:50.420 "data_offset": 0, 00:21:50.420 "data_size": 0 00:21:50.420 } 00:21:50.420 ] 00:21:50.420 }' 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.420 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.988 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:50.988 
07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 [2024-11-20 07:19:14.974303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.989 [2024-11-20 07:19:14.974368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 [2024-11-20 07:19:14.986327] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:50.989 [2024-11-20 07:19:14.986605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:50.989 [2024-11-20 07:19:14.986781] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.989 [2024-11-20 07:19:14.986849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 [2024-11-20 
07:19:15.031502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.989 BaseBdev1 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 [ 00:21:50.989 { 00:21:50.989 "name": "BaseBdev1", 00:21:50.989 "aliases": [ 00:21:50.989 "f351d719-b653-4d51-b374-927244f6a9de" 00:21:50.989 ], 00:21:50.989 "product_name": "Malloc disk", 00:21:50.989 "block_size": 512, 00:21:50.989 "num_blocks": 
65536, 00:21:50.989 "uuid": "f351d719-b653-4d51-b374-927244f6a9de", 00:21:50.989 "assigned_rate_limits": { 00:21:50.989 "rw_ios_per_sec": 0, 00:21:50.989 "rw_mbytes_per_sec": 0, 00:21:50.989 "r_mbytes_per_sec": 0, 00:21:50.989 "w_mbytes_per_sec": 0 00:21:50.989 }, 00:21:50.989 "claimed": true, 00:21:50.989 "claim_type": "exclusive_write", 00:21:50.989 "zoned": false, 00:21:50.989 "supported_io_types": { 00:21:50.989 "read": true, 00:21:50.989 "write": true, 00:21:50.989 "unmap": true, 00:21:50.989 "flush": true, 00:21:50.989 "reset": true, 00:21:50.989 "nvme_admin": false, 00:21:50.989 "nvme_io": false, 00:21:50.989 "nvme_io_md": false, 00:21:50.989 "write_zeroes": true, 00:21:50.989 "zcopy": true, 00:21:50.989 "get_zone_info": false, 00:21:50.989 "zone_management": false, 00:21:50.989 "zone_append": false, 00:21:50.989 "compare": false, 00:21:50.989 "compare_and_write": false, 00:21:50.989 "abort": true, 00:21:50.989 "seek_hole": false, 00:21:50.989 "seek_data": false, 00:21:50.989 "copy": true, 00:21:50.989 "nvme_iov_md": false 00:21:50.989 }, 00:21:50.989 "memory_domains": [ 00:21:50.989 { 00:21:50.989 "dma_device_id": "system", 00:21:50.989 "dma_device_type": 1 00:21:50.989 }, 00:21:50.989 { 00:21:50.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.989 "dma_device_type": 2 00:21:50.989 } 00:21:50.989 ], 00:21:50.989 "driver_specific": {} 00:21:50.989 } 00:21:50.989 ] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.989 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.990 "name": "Existed_Raid", 00:21:50.990 "uuid": "7fcf692e-9a13-46e7-a516-6bbd6157e963", 00:21:50.990 "strip_size_kb": 0, 00:21:50.990 "state": "configuring", 00:21:50.990 "raid_level": "raid1", 00:21:50.990 "superblock": true, 00:21:50.990 "num_base_bdevs": 2, 00:21:50.990 "num_base_bdevs_discovered": 1, 00:21:50.990 "num_base_bdevs_operational": 2, 00:21:50.990 "base_bdevs_list": [ 00:21:50.990 { 00:21:50.990 "name": "BaseBdev1", 00:21:50.990 "uuid": 
"f351d719-b653-4d51-b374-927244f6a9de", 00:21:50.990 "is_configured": true, 00:21:50.990 "data_offset": 2048, 00:21:50.990 "data_size": 63488 00:21:50.990 }, 00:21:50.990 { 00:21:50.990 "name": "BaseBdev2", 00:21:50.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.990 "is_configured": false, 00:21:50.990 "data_offset": 0, 00:21:50.990 "data_size": 0 00:21:50.990 } 00:21:50.990 ] 00:21:50.990 }' 00:21:50.990 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.990 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 [2024-11-20 07:19:15.575754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:51.557 [2024-11-20 07:19:15.575822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 [2024-11-20 07:19:15.583770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.557 [2024-11-20 07:19:15.586289] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:21:51.557 [2024-11-20 07:19:15.586342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 
07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.557 "name": "Existed_Raid", 00:21:51.557 "uuid": "47619d21-48b5-4600-aef5-ab0aa446ff8d", 00:21:51.557 "strip_size_kb": 0, 00:21:51.557 "state": "configuring", 00:21:51.557 "raid_level": "raid1", 00:21:51.557 "superblock": true, 00:21:51.557 "num_base_bdevs": 2, 00:21:51.557 "num_base_bdevs_discovered": 1, 00:21:51.557 "num_base_bdevs_operational": 2, 00:21:51.557 "base_bdevs_list": [ 00:21:51.557 { 00:21:51.557 "name": "BaseBdev1", 00:21:51.557 "uuid": "f351d719-b653-4d51-b374-927244f6a9de", 00:21:51.557 "is_configured": true, 00:21:51.557 "data_offset": 2048, 00:21:51.557 "data_size": 63488 00:21:51.557 }, 00:21:51.557 { 00:21:51.557 "name": "BaseBdev2", 00:21:51.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.557 "is_configured": false, 00:21:51.557 "data_offset": 0, 00:21:51.557 "data_size": 0 00:21:51.557 } 00:21:51.557 ] 00:21:51.557 }' 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.557 07:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 [2024-11-20 07:19:16.182963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:52.125 [2024-11-20 07:19:16.183502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:21:52.125 [2024-11-20 07:19:16.183530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:52.125 BaseBdev2 00:21:52.125 [2024-11-20 07:19:16.183903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:52.125 [2024-11-20 07:19:16.184129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:52.125 [2024-11-20 07:19:16.184163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:52.125 [2024-11-20 07:19:16.184337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 07:19:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 [ 00:21:52.125 { 00:21:52.125 "name": "BaseBdev2", 00:21:52.125 "aliases": [ 00:21:52.125 "28ad0910-97f9-4f44-a09c-7d3b8e310219" 00:21:52.125 ], 00:21:52.125 "product_name": "Malloc disk", 00:21:52.125 "block_size": 512, 00:21:52.125 "num_blocks": 65536, 00:21:52.125 "uuid": "28ad0910-97f9-4f44-a09c-7d3b8e310219", 00:21:52.125 "assigned_rate_limits": { 00:21:52.125 "rw_ios_per_sec": 0, 00:21:52.125 "rw_mbytes_per_sec": 0, 00:21:52.125 "r_mbytes_per_sec": 0, 00:21:52.125 "w_mbytes_per_sec": 0 00:21:52.125 }, 00:21:52.125 "claimed": true, 00:21:52.125 "claim_type": "exclusive_write", 00:21:52.125 "zoned": false, 00:21:52.125 "supported_io_types": { 00:21:52.125 "read": true, 00:21:52.125 "write": true, 00:21:52.125 "unmap": true, 00:21:52.125 "flush": true, 00:21:52.125 "reset": true, 00:21:52.125 "nvme_admin": false, 00:21:52.125 "nvme_io": false, 00:21:52.125 "nvme_io_md": false, 00:21:52.125 "write_zeroes": true, 00:21:52.125 "zcopy": true, 00:21:52.125 "get_zone_info": false, 00:21:52.125 "zone_management": false, 00:21:52.125 "zone_append": false, 00:21:52.125 "compare": false, 00:21:52.125 "compare_and_write": false, 00:21:52.125 "abort": true, 00:21:52.125 "seek_hole": false, 00:21:52.125 "seek_data": false, 00:21:52.125 "copy": true, 00:21:52.125 "nvme_iov_md": false 00:21:52.125 }, 00:21:52.125 "memory_domains": [ 00:21:52.125 { 00:21:52.125 "dma_device_id": "system", 00:21:52.125 "dma_device_type": 1 00:21:52.125 }, 00:21:52.125 { 00:21:52.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.125 "dma_device_type": 2 00:21:52.125 } 00:21:52.125 ], 00:21:52.125 "driver_specific": {} 00:21:52.125 } 00:21:52.125 ] 
00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 
07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.125 "name": "Existed_Raid", 00:21:52.125 "uuid": "47619d21-48b5-4600-aef5-ab0aa446ff8d", 00:21:52.125 "strip_size_kb": 0, 00:21:52.125 "state": "online", 00:21:52.125 "raid_level": "raid1", 00:21:52.125 "superblock": true, 00:21:52.125 "num_base_bdevs": 2, 00:21:52.125 "num_base_bdevs_discovered": 2, 00:21:52.125 "num_base_bdevs_operational": 2, 00:21:52.125 "base_bdevs_list": [ 00:21:52.125 { 00:21:52.125 "name": "BaseBdev1", 00:21:52.125 "uuid": "f351d719-b653-4d51-b374-927244f6a9de", 00:21:52.125 "is_configured": true, 00:21:52.125 "data_offset": 2048, 00:21:52.125 "data_size": 63488 00:21:52.125 }, 00:21:52.125 { 00:21:52.125 "name": "BaseBdev2", 00:21:52.125 "uuid": "28ad0910-97f9-4f44-a09c-7d3b8e310219", 00:21:52.125 "is_configured": true, 00:21:52.125 "data_offset": 2048, 00:21:52.125 "data_size": 63488 00:21:52.125 } 00:21:52.125 ] 00:21:52.125 }' 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.125 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:52.694 07:19:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:52.694 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.695 [2024-11-20 07:19:16.743662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:52.695 "name": "Existed_Raid", 00:21:52.695 "aliases": [ 00:21:52.695 "47619d21-48b5-4600-aef5-ab0aa446ff8d" 00:21:52.695 ], 00:21:52.695 "product_name": "Raid Volume", 00:21:52.695 "block_size": 512, 00:21:52.695 "num_blocks": 63488, 00:21:52.695 "uuid": "47619d21-48b5-4600-aef5-ab0aa446ff8d", 00:21:52.695 "assigned_rate_limits": { 00:21:52.695 "rw_ios_per_sec": 0, 00:21:52.695 "rw_mbytes_per_sec": 0, 00:21:52.695 "r_mbytes_per_sec": 0, 00:21:52.695 "w_mbytes_per_sec": 0 00:21:52.695 }, 00:21:52.695 "claimed": false, 00:21:52.695 "zoned": false, 00:21:52.695 "supported_io_types": { 00:21:52.695 "read": true, 00:21:52.695 "write": true, 00:21:52.695 "unmap": false, 00:21:52.695 "flush": false, 00:21:52.695 "reset": true, 00:21:52.695 "nvme_admin": false, 00:21:52.695 "nvme_io": false, 00:21:52.695 "nvme_io_md": false, 00:21:52.695 "write_zeroes": true, 00:21:52.695 "zcopy": false, 00:21:52.695 "get_zone_info": false, 00:21:52.695 "zone_management": false, 00:21:52.695 "zone_append": false, 00:21:52.695 "compare": false, 00:21:52.695 "compare_and_write": false, 00:21:52.695 "abort": false, 
00:21:52.695 "seek_hole": false, 00:21:52.695 "seek_data": false, 00:21:52.695 "copy": false, 00:21:52.695 "nvme_iov_md": false 00:21:52.695 }, 00:21:52.695 "memory_domains": [ 00:21:52.695 { 00:21:52.695 "dma_device_id": "system", 00:21:52.695 "dma_device_type": 1 00:21:52.695 }, 00:21:52.695 { 00:21:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.695 "dma_device_type": 2 00:21:52.695 }, 00:21:52.695 { 00:21:52.695 "dma_device_id": "system", 00:21:52.695 "dma_device_type": 1 00:21:52.695 }, 00:21:52.695 { 00:21:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.695 "dma_device_type": 2 00:21:52.695 } 00:21:52.695 ], 00:21:52.695 "driver_specific": { 00:21:52.695 "raid": { 00:21:52.695 "uuid": "47619d21-48b5-4600-aef5-ab0aa446ff8d", 00:21:52.695 "strip_size_kb": 0, 00:21:52.695 "state": "online", 00:21:52.695 "raid_level": "raid1", 00:21:52.695 "superblock": true, 00:21:52.695 "num_base_bdevs": 2, 00:21:52.695 "num_base_bdevs_discovered": 2, 00:21:52.695 "num_base_bdevs_operational": 2, 00:21:52.695 "base_bdevs_list": [ 00:21:52.695 { 00:21:52.695 "name": "BaseBdev1", 00:21:52.695 "uuid": "f351d719-b653-4d51-b374-927244f6a9de", 00:21:52.695 "is_configured": true, 00:21:52.695 "data_offset": 2048, 00:21:52.695 "data_size": 63488 00:21:52.695 }, 00:21:52.695 { 00:21:52.695 "name": "BaseBdev2", 00:21:52.695 "uuid": "28ad0910-97f9-4f44-a09c-7d3b8e310219", 00:21:52.695 "is_configured": true, 00:21:52.695 "data_offset": 2048, 00:21:52.695 "data_size": 63488 00:21:52.695 } 00:21:52.695 ] 00:21:52.695 } 00:21:52.695 } 00:21:52.695 }' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:52.695 BaseBdev2' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.695 07:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.954 07:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.954 07:19:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.954 [2024-11-20 07:19:17.007396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.954 "name": "Existed_Raid", 00:21:52.954 "uuid": "47619d21-48b5-4600-aef5-ab0aa446ff8d", 00:21:52.954 "strip_size_kb": 0, 00:21:52.954 "state": "online", 00:21:52.954 "raid_level": "raid1", 00:21:52.954 "superblock": true, 00:21:52.954 "num_base_bdevs": 2, 00:21:52.954 "num_base_bdevs_discovered": 1, 00:21:52.954 "num_base_bdevs_operational": 1, 00:21:52.954 "base_bdevs_list": [ 00:21:52.954 { 00:21:52.954 "name": null, 00:21:52.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.954 "is_configured": false, 00:21:52.954 "data_offset": 0, 00:21:52.954 "data_size": 63488 00:21:52.954 }, 00:21:52.954 { 00:21:52.954 "name": "BaseBdev2", 00:21:52.954 "uuid": "28ad0910-97f9-4f44-a09c-7d3b8e310219", 00:21:52.954 "is_configured": true, 00:21:52.954 "data_offset": 2048, 00:21:52.954 "data_size": 63488 00:21:52.954 } 00:21:52.954 ] 00:21:52.954 }' 00:21:52.954 07:19:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.954 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.523 [2024-11-20 07:19:17.678869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:53.523 [2024-11-20 07:19:17.679167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.523 [2024-11-20 07:19:17.767888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.523 [2024-11-20 07:19:17.767969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.523 [2024-11-20 07:19:17.767990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.523 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63139 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63139 ']' 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63139 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:53.782 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63139 00:21:53.783 killing process with pid 63139 00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63139' 00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63139 00:21:53.783 [2024-11-20 07:19:17.864427] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:53.783 07:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63139 00:21:53.783 [2024-11-20 07:19:17.879052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.720 07:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:54.720 00:21:54.720 real 0m5.606s 00:21:54.720 user 0m8.415s 00:21:54.720 sys 0m0.876s 00:21:54.720 ************************************ 00:21:54.720 END TEST raid_state_function_test_sb 00:21:54.720 ************************************ 00:21:54.720 07:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.720 07:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.720 07:19:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:21:54.720 07:19:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:54.720 07:19:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.720 07:19:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.720 ************************************ 00:21:54.720 START TEST 
raid_superblock_test 00:21:54.720 ************************************ 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63391 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63391 00:21:54.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63391 ']' 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.720 07:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.979 [2024-11-20 07:19:19.085447] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:54.979 [2024-11-20 07:19:19.085655] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63391 ] 00:21:55.237 [2024-11-20 07:19:19.278701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.237 [2024-11-20 07:19:19.437518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.495 [2024-11-20 07:19:19.657390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.495 [2024-11-20 07:19:19.657449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:56.064 
07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 malloc1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 [2024-11-20 07:19:20.167990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:56.064 [2024-11-20 07:19:20.168066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.064 [2024-11-20 07:19:20.168100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:56.064 [2024-11-20 07:19:20.168116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.064 [2024-11-20 07:19:20.171184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.064 [2024-11-20 07:19:20.171393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:56.064 pt1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 malloc2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 [2024-11-20 07:19:20.224910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.064 [2024-11-20 07:19:20.225011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.064 [2024-11-20 07:19:20.225042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:56.064 [2024-11-20 07:19:20.225071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.064 [2024-11-20 07:19:20.228137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.064 [2024-11-20 07:19:20.228330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.064 
pt2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 [2024-11-20 07:19:20.237148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:56.064 [2024-11-20 07:19:20.239558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.064 [2024-11-20 07:19:20.239832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:56.064 [2024-11-20 07:19:20.239857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:56.064 [2024-11-20 07:19:20.240163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:56.064 [2024-11-20 07:19:20.240491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:56.064 [2024-11-20 07:19:20.240525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:56.064 [2024-11-20 07:19:20.240752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.064 "name": "raid_bdev1", 00:21:56.064 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:56.064 "strip_size_kb": 0, 00:21:56.064 "state": "online", 00:21:56.064 "raid_level": "raid1", 00:21:56.064 "superblock": true, 00:21:56.064 "num_base_bdevs": 2, 00:21:56.064 "num_base_bdevs_discovered": 2, 00:21:56.064 "num_base_bdevs_operational": 2, 00:21:56.064 "base_bdevs_list": [ 00:21:56.064 { 00:21:56.064 "name": "pt1", 00:21:56.064 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:56.064 "is_configured": true, 00:21:56.064 "data_offset": 2048, 00:21:56.064 "data_size": 63488 00:21:56.064 }, 00:21:56.064 { 00:21:56.064 "name": "pt2", 00:21:56.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.064 "is_configured": true, 00:21:56.064 "data_offset": 2048, 00:21:56.064 "data_size": 63488 00:21:56.064 } 00:21:56.064 ] 00:21:56.064 }' 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.064 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.632 [2024-11-20 07:19:20.749671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.632 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:21:56.632 "name": "raid_bdev1", 00:21:56.632 "aliases": [ 00:21:56.632 "70a23fb6-e9de-4207-b2b2-bca2fc311ba1" 00:21:56.632 ], 00:21:56.632 "product_name": "Raid Volume", 00:21:56.632 "block_size": 512, 00:21:56.632 "num_blocks": 63488, 00:21:56.632 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:56.632 "assigned_rate_limits": { 00:21:56.632 "rw_ios_per_sec": 0, 00:21:56.632 "rw_mbytes_per_sec": 0, 00:21:56.632 "r_mbytes_per_sec": 0, 00:21:56.632 "w_mbytes_per_sec": 0 00:21:56.632 }, 00:21:56.632 "claimed": false, 00:21:56.632 "zoned": false, 00:21:56.632 "supported_io_types": { 00:21:56.632 "read": true, 00:21:56.632 "write": true, 00:21:56.632 "unmap": false, 00:21:56.632 "flush": false, 00:21:56.632 "reset": true, 00:21:56.632 "nvme_admin": false, 00:21:56.632 "nvme_io": false, 00:21:56.632 "nvme_io_md": false, 00:21:56.632 "write_zeroes": true, 00:21:56.632 "zcopy": false, 00:21:56.632 "get_zone_info": false, 00:21:56.632 "zone_management": false, 00:21:56.632 "zone_append": false, 00:21:56.632 "compare": false, 00:21:56.632 "compare_and_write": false, 00:21:56.632 "abort": false, 00:21:56.632 "seek_hole": false, 00:21:56.632 "seek_data": false, 00:21:56.632 "copy": false, 00:21:56.632 "nvme_iov_md": false 00:21:56.632 }, 00:21:56.632 "memory_domains": [ 00:21:56.632 { 00:21:56.632 "dma_device_id": "system", 00:21:56.632 "dma_device_type": 1 00:21:56.632 }, 00:21:56.632 { 00:21:56.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.632 "dma_device_type": 2 00:21:56.632 }, 00:21:56.632 { 00:21:56.632 "dma_device_id": "system", 00:21:56.632 "dma_device_type": 1 00:21:56.632 }, 00:21:56.632 { 00:21:56.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.632 "dma_device_type": 2 00:21:56.632 } 00:21:56.632 ], 00:21:56.632 "driver_specific": { 00:21:56.632 "raid": { 00:21:56.632 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:56.632 "strip_size_kb": 0, 00:21:56.632 "state": "online", 00:21:56.632 "raid_level": "raid1", 
00:21:56.632 "superblock": true, 00:21:56.632 "num_base_bdevs": 2, 00:21:56.632 "num_base_bdevs_discovered": 2, 00:21:56.632 "num_base_bdevs_operational": 2, 00:21:56.632 "base_bdevs_list": [ 00:21:56.632 { 00:21:56.632 "name": "pt1", 00:21:56.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.633 "is_configured": true, 00:21:56.633 "data_offset": 2048, 00:21:56.633 "data_size": 63488 00:21:56.633 }, 00:21:56.633 { 00:21:56.633 "name": "pt2", 00:21:56.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.633 "is_configured": true, 00:21:56.633 "data_offset": 2048, 00:21:56.633 "data_size": 63488 00:21:56.633 } 00:21:56.633 ] 00:21:56.633 } 00:21:56.633 } 00:21:56.633 }' 00:21:56.633 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.633 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:56.633 pt2' 00:21:56.633 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.891 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 07:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:56.892 [2024-11-20 07:19:21.037688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=70a23fb6-e9de-4207-b2b2-bca2fc311ba1 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 70a23fb6-e9de-4207-b2b2-bca2fc311ba1 ']' 00:21:56.892 07:19:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 [2024-11-20 07:19:21.085319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.892 [2024-11-20 07:19:21.085349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.892 [2024-11-20 07:19:21.085453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.892 [2024-11-20 07:19:21.085525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.892 [2024-11-20 07:19:21.085546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.892 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.151 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.151 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:57.151 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:57.151 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:57.152 07:19:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.152 [2024-11-20 07:19:21.225500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:57.152 [2024-11-20 07:19:21.228748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:57.152 [2024-11-20 07:19:21.228837] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:57.152 [2024-11-20 07:19:21.228950] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:57.152 [2024-11-20 07:19:21.229022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.152 [2024-11-20 07:19:21.229048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:57.152 request: 00:21:57.152 { 00:21:57.152 "name": "raid_bdev1", 00:21:57.152 "raid_level": "raid1", 00:21:57.152 "base_bdevs": [ 00:21:57.152 "malloc1", 00:21:57.152 "malloc2" 00:21:57.152 ], 00:21:57.152 "superblock": false, 00:21:57.152 "method": "bdev_raid_create", 00:21:57.152 "req_id": 1 00:21:57.152 } 00:21:57.152 Got 
JSON-RPC error response 00:21:57.152 response: 00:21:57.152 { 00:21:57.152 "code": -17, 00:21:57.152 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:57.152 } 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.152 [2024-11-20 07:19:21.301843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:57.152 [2024-11-20 07:19:21.302053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:57.152 [2024-11-20 07:19:21.302122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:57.152 [2024-11-20 07:19:21.302264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.152 [2024-11-20 07:19:21.305344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.152 [2024-11-20 07:19:21.305530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:57.152 [2024-11-20 07:19:21.305774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:57.152 [2024-11-20 07:19:21.305955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:57.152 pt1 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.152 
07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.152 "name": "raid_bdev1", 00:21:57.152 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:57.152 "strip_size_kb": 0, 00:21:57.152 "state": "configuring", 00:21:57.152 "raid_level": "raid1", 00:21:57.152 "superblock": true, 00:21:57.152 "num_base_bdevs": 2, 00:21:57.152 "num_base_bdevs_discovered": 1, 00:21:57.152 "num_base_bdevs_operational": 2, 00:21:57.152 "base_bdevs_list": [ 00:21:57.152 { 00:21:57.152 "name": "pt1", 00:21:57.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:57.152 "is_configured": true, 00:21:57.152 "data_offset": 2048, 00:21:57.152 "data_size": 63488 00:21:57.152 }, 00:21:57.152 { 00:21:57.152 "name": null, 00:21:57.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.152 "is_configured": false, 00:21:57.152 "data_offset": 2048, 00:21:57.152 "data_size": 63488 00:21:57.152 } 00:21:57.152 ] 00:21:57.152 }' 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.152 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.721 [2024-11-20 07:19:21.838034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:57.721 [2024-11-20 07:19:21.838135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.721 [2024-11-20 07:19:21.838168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:57.721 [2024-11-20 07:19:21.838186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.721 [2024-11-20 07:19:21.838863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.721 [2024-11-20 07:19:21.838902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:57.721 [2024-11-20 07:19:21.839005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:57.721 [2024-11-20 07:19:21.839050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.721 [2024-11-20 07:19:21.839206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:57.721 [2024-11-20 07:19:21.839227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:57.721 [2024-11-20 07:19:21.839526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:57.721 [2024-11-20 07:19:21.839768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:57.721 [2024-11-20 07:19:21.839785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:21:57.721 [2024-11-20 07:19:21.839972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.721 pt2 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.721 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.721 "name": "raid_bdev1", 00:21:57.721 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:57.721 "strip_size_kb": 0, 00:21:57.721 "state": "online", 00:21:57.721 "raid_level": "raid1", 00:21:57.721 "superblock": true, 00:21:57.721 "num_base_bdevs": 2, 00:21:57.721 "num_base_bdevs_discovered": 2, 00:21:57.721 "num_base_bdevs_operational": 2, 00:21:57.721 "base_bdevs_list": [ 00:21:57.721 { 00:21:57.721 "name": "pt1", 00:21:57.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:57.721 "is_configured": true, 00:21:57.721 "data_offset": 2048, 00:21:57.721 "data_size": 63488 00:21:57.721 }, 00:21:57.721 { 00:21:57.721 "name": "pt2", 00:21:57.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.721 "is_configured": true, 00:21:57.721 "data_offset": 2048, 00:21:57.721 "data_size": 63488 00:21:57.721 } 00:21:57.722 ] 00:21:57.722 }' 00:21:57.722 07:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.722 07:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:58.288 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:58.289 [2024-11-20 07:19:22.382527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:58.289 "name": "raid_bdev1", 00:21:58.289 "aliases": [ 00:21:58.289 "70a23fb6-e9de-4207-b2b2-bca2fc311ba1" 00:21:58.289 ], 00:21:58.289 "product_name": "Raid Volume", 00:21:58.289 "block_size": 512, 00:21:58.289 "num_blocks": 63488, 00:21:58.289 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:58.289 "assigned_rate_limits": { 00:21:58.289 "rw_ios_per_sec": 0, 00:21:58.289 "rw_mbytes_per_sec": 0, 00:21:58.289 "r_mbytes_per_sec": 0, 00:21:58.289 "w_mbytes_per_sec": 0 00:21:58.289 }, 00:21:58.289 "claimed": false, 00:21:58.289 "zoned": false, 00:21:58.289 "supported_io_types": { 00:21:58.289 "read": true, 00:21:58.289 "write": true, 00:21:58.289 "unmap": false, 00:21:58.289 "flush": false, 00:21:58.289 "reset": true, 00:21:58.289 "nvme_admin": false, 00:21:58.289 "nvme_io": false, 00:21:58.289 "nvme_io_md": false, 00:21:58.289 "write_zeroes": true, 00:21:58.289 "zcopy": false, 00:21:58.289 "get_zone_info": false, 00:21:58.289 "zone_management": false, 00:21:58.289 "zone_append": false, 00:21:58.289 "compare": false, 00:21:58.289 "compare_and_write": false, 00:21:58.289 "abort": false, 00:21:58.289 "seek_hole": false, 00:21:58.289 "seek_data": false, 00:21:58.289 "copy": false, 00:21:58.289 "nvme_iov_md": false 00:21:58.289 }, 00:21:58.289 "memory_domains": [ 00:21:58.289 { 00:21:58.289 "dma_device_id": 
"system", 00:21:58.289 "dma_device_type": 1 00:21:58.289 }, 00:21:58.289 { 00:21:58.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.289 "dma_device_type": 2 00:21:58.289 }, 00:21:58.289 { 00:21:58.289 "dma_device_id": "system", 00:21:58.289 "dma_device_type": 1 00:21:58.289 }, 00:21:58.289 { 00:21:58.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.289 "dma_device_type": 2 00:21:58.289 } 00:21:58.289 ], 00:21:58.289 "driver_specific": { 00:21:58.289 "raid": { 00:21:58.289 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:58.289 "strip_size_kb": 0, 00:21:58.289 "state": "online", 00:21:58.289 "raid_level": "raid1", 00:21:58.289 "superblock": true, 00:21:58.289 "num_base_bdevs": 2, 00:21:58.289 "num_base_bdevs_discovered": 2, 00:21:58.289 "num_base_bdevs_operational": 2, 00:21:58.289 "base_bdevs_list": [ 00:21:58.289 { 00:21:58.289 "name": "pt1", 00:21:58.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:58.289 "is_configured": true, 00:21:58.289 "data_offset": 2048, 00:21:58.289 "data_size": 63488 00:21:58.289 }, 00:21:58.289 { 00:21:58.289 "name": "pt2", 00:21:58.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:58.289 "is_configured": true, 00:21:58.289 "data_offset": 2048, 00:21:58.289 "data_size": 63488 00:21:58.289 } 00:21:58.289 ] 00:21:58.289 } 00:21:58.289 } 00:21:58.289 }' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:58.289 pt2' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.289 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.548 [2024-11-20 07:19:22.658509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 70a23fb6-e9de-4207-b2b2-bca2fc311ba1 '!=' 70a23fb6-e9de-4207-b2b2-bca2fc311ba1 ']' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.548 [2024-11-20 07:19:22.710336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.548 "name": "raid_bdev1", 00:21:58.548 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:58.548 "strip_size_kb": 0, 00:21:58.548 "state": "online", 00:21:58.548 "raid_level": "raid1", 00:21:58.548 "superblock": true, 00:21:58.548 "num_base_bdevs": 2, 00:21:58.548 "num_base_bdevs_discovered": 1, 00:21:58.548 "num_base_bdevs_operational": 1, 00:21:58.548 "base_bdevs_list": [ 00:21:58.548 { 00:21:58.548 "name": null, 00:21:58.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.548 "is_configured": false, 00:21:58.548 "data_offset": 0, 00:21:58.548 "data_size": 63488 00:21:58.548 }, 00:21:58.548 { 00:21:58.548 "name": "pt2", 00:21:58.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:58.548 "is_configured": true, 00:21:58.548 "data_offset": 2048, 00:21:58.548 "data_size": 63488 00:21:58.548 } 00:21:58.548 ] 00:21:58.548 }' 
00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.548 07:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 [2024-11-20 07:19:23.222484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.206 [2024-11-20 07:19:23.222694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.206 [2024-11-20 07:19:23.222821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.206 [2024-11-20 07:19:23.222890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.206 [2024-11-20 07:19:23.222910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.206 [2024-11-20 07:19:23.298428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:59.206 [2024-11-20 07:19:23.298665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.206 [2024-11-20 07:19:23.298704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:59.206 [2024-11-20 07:19:23.298724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.206 
[2024-11-20 07:19:23.301541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.206 [2024-11-20 07:19:23.301602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:59.206 [2024-11-20 07:19:23.301704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:59.206 [2024-11-20 07:19:23.301765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:59.206 [2024-11-20 07:19:23.301898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:59.206 [2024-11-20 07:19:23.301920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:59.206 [2024-11-20 07:19:23.302205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:59.206 [2024-11-20 07:19:23.302411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:59.206 [2024-11-20 07:19:23.302441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:59.206 [2024-11-20 07:19:23.302700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.206 pt2 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.206 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.207 "name": "raid_bdev1", 00:21:59.207 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:59.207 "strip_size_kb": 0, 00:21:59.207 "state": "online", 00:21:59.207 "raid_level": "raid1", 00:21:59.207 "superblock": true, 00:21:59.207 "num_base_bdevs": 2, 00:21:59.207 "num_base_bdevs_discovered": 1, 00:21:59.207 "num_base_bdevs_operational": 1, 00:21:59.207 "base_bdevs_list": [ 00:21:59.207 { 00:21:59.207 "name": null, 00:21:59.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.207 "is_configured": false, 00:21:59.207 "data_offset": 2048, 00:21:59.207 "data_size": 63488 00:21:59.207 }, 00:21:59.207 { 00:21:59.207 "name": "pt2", 00:21:59.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.207 "is_configured": true, 00:21:59.207 "data_offset": 2048, 00:21:59.207 "data_size": 63488 00:21:59.207 } 00:21:59.207 ] 00:21:59.207 }' 
00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.207 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.772 [2024-11-20 07:19:23.830779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.772 [2024-11-20 07:19:23.830832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.772 [2024-11-20 07:19:23.830946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.772 [2024-11-20 07:19:23.831038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.772 [2024-11-20 07:19:23.831057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.772 [2024-11-20 07:19:23.894801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:59.772 [2024-11-20 07:19:23.894880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.772 [2024-11-20 07:19:23.894912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:59.772 [2024-11-20 07:19:23.894928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.772 [2024-11-20 07:19:23.898075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.772 [2024-11-20 07:19:23.898130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:59.772 [2024-11-20 07:19:23.898256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:59.772 [2024-11-20 07:19:23.898326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:59.772 [2024-11-20 07:19:23.898528] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:59.772 [2024-11-20 07:19:23.898558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.772 [2024-11-20 07:19:23.898606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:59.772 [2024-11-20 07:19:23.898714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:21:59.772 [2024-11-20 07:19:23.898849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:59.772 [2024-11-20 07:19:23.898869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:59.772 [2024-11-20 07:19:23.899213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:59.772 [2024-11-20 07:19:23.899421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:59.772 [2024-11-20 07:19:23.899446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:59.772 pt1 00:21:59.772 [2024-11-20 07:19:23.899749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.772 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.772 "name": "raid_bdev1", 00:21:59.772 "uuid": "70a23fb6-e9de-4207-b2b2-bca2fc311ba1", 00:21:59.772 "strip_size_kb": 0, 00:21:59.772 "state": "online", 00:21:59.772 "raid_level": "raid1", 00:21:59.772 "superblock": true, 00:21:59.772 "num_base_bdevs": 2, 00:21:59.772 "num_base_bdevs_discovered": 1, 00:21:59.772 "num_base_bdevs_operational": 1, 00:21:59.772 "base_bdevs_list": [ 00:21:59.772 { 00:21:59.772 "name": null, 00:21:59.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.772 "is_configured": false, 00:21:59.772 "data_offset": 2048, 00:21:59.772 "data_size": 63488 00:21:59.772 }, 00:21:59.772 { 00:21:59.772 "name": "pt2", 00:21:59.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:59.773 "is_configured": true, 00:21:59.773 "data_offset": 2048, 00:21:59.773 "data_size": 63488 00:21:59.773 } 00:21:59.773 ] 00:21:59.773 }' 00:21:59.773 07:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.773 07:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:00.338 [2024-11-20 07:19:24.495411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 70a23fb6-e9de-4207-b2b2-bca2fc311ba1 '!=' 70a23fb6-e9de-4207-b2b2-bca2fc311ba1 ']' 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63391 00:22:00.338 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63391 ']' 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63391 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63391 00:22:00.339 07:19:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.339 killing process with pid 63391 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63391' 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63391 00:22:00.339 [2024-11-20 07:19:24.564846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.339 07:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63391 00:22:00.339 [2024-11-20 07:19:24.564958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.339 [2024-11-20 07:19:24.565025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.339 [2024-11-20 07:19:24.565054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:00.597 [2024-11-20 07:19:24.740836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.534 07:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:01.534 00:22:01.534 real 0m6.776s 00:22:01.534 user 0m10.762s 00:22:01.534 sys 0m0.988s 00:22:01.534 07:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.534 07:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.534 ************************************ 00:22:01.534 END TEST raid_superblock_test 00:22:01.534 ************************************ 00:22:01.534 07:19:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:22:01.534 07:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:01.534 07:19:25 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.534 07:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.534 ************************************ 00:22:01.534 START TEST raid_read_error_test 00:22:01.534 ************************************ 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:01.534 07:19:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.T0aqxy7VmL 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63727 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63727 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63727 ']' 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.534 07:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.793 [2024-11-20 07:19:25.907779] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:01.793 [2024-11-20 07:19:25.907925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63727 ] 00:22:01.793 [2024-11-20 07:19:26.079896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.052 [2024-11-20 07:19:26.207383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.310 [2024-11-20 07:19:26.415651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.310 [2024-11-20 07:19:26.415707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.877 BaseBdev1_malloc 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.877 true 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.877 [2024-11-20 07:19:26.989684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:02.877 [2024-11-20 07:19:26.989768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.877 [2024-11-20 07:19:26.989797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:02.877 [2024-11-20 07:19:26.989815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.877 [2024-11-20 07:19:26.992687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.877 [2024-11-20 07:19:26.992749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.877 BaseBdev1 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.877 07:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:02.877 BaseBdev2_malloc
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:02.877 true
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:02.877 [2024-11-20 07:19:27.049575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:22:02.877 [2024-11-20 07:19:27.049660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:02.877 [2024-11-20 07:19:27.049686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:22:02.877 [2024-11-20 07:19:27.049705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:02.877 [2024-11-20 07:19:27.052506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:02.877 [2024-11-20 07:19:27.052558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:22:02.877 BaseBdev2
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:02.877 [2024-11-20 07:19:27.057658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:02.877 [2024-11-20 07:19:27.060107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:02.877 [2024-11-20 07:19:27.060369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:02.877 [2024-11-20 07:19:27.060403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:22:02.877 [2024-11-20 07:19:27.060735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:22:02.877 [2024-11-20 07:19:27.060982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:02.877 [2024-11-20 07:19:27.061010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:22:02.877 [2024-11-20 07:19:27.061199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:02.877 "name": "raid_bdev1",
00:22:02.877 "uuid": "a91b3af6-e1f1-4902-b625-26aab25bf74c",
00:22:02.877 "strip_size_kb": 0,
00:22:02.877 "state": "online",
00:22:02.877 "raid_level": "raid1",
00:22:02.877 "superblock": true,
00:22:02.877 "num_base_bdevs": 2,
00:22:02.877 "num_base_bdevs_discovered": 2,
00:22:02.877 "num_base_bdevs_operational": 2,
00:22:02.877 "base_bdevs_list": [
00:22:02.877 {
00:22:02.877 "name": "BaseBdev1",
00:22:02.877 "uuid": "30dc87d0-5952-5bd2-b088-3d9583c4fbcf",
00:22:02.877 "is_configured": true,
00:22:02.877 "data_offset": 2048,
00:22:02.877 "data_size": 63488
00:22:02.877 },
00:22:02.877 {
00:22:02.877 "name": "BaseBdev2",
00:22:02.877 "uuid": "41d7eb04-77e8-5c9e-95d5-f2b1a193162f",
00:22:02.877 "is_configured": true,
00:22:02.877 "data_offset": 2048,
00:22:02.877 "data_size": 63488
00:22:02.877 }
00:22:02.877 ]
00:22:02.877 }'
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:02.877 07:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:03.444 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:22:03.444 07:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:22:03.444 [2024-11-20 07:19:27.715086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:04.430 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:04.431 "name": "raid_bdev1",
00:22:04.431 "uuid": "a91b3af6-e1f1-4902-b625-26aab25bf74c",
00:22:04.431 "strip_size_kb": 0,
00:22:04.431 "state": "online",
00:22:04.431 "raid_level": "raid1",
00:22:04.431 "superblock": true,
00:22:04.431 "num_base_bdevs": 2,
00:22:04.431 "num_base_bdevs_discovered": 2,
00:22:04.431 "num_base_bdevs_operational": 2,
00:22:04.431 "base_bdevs_list": [
00:22:04.431 {
00:22:04.431 "name": "BaseBdev1",
00:22:04.431 "uuid": "30dc87d0-5952-5bd2-b088-3d9583c4fbcf",
00:22:04.431 "is_configured": true,
00:22:04.431 "data_offset": 2048,
00:22:04.431 "data_size": 63488
00:22:04.431 },
00:22:04.431 {
00:22:04.431 "name": "BaseBdev2",
00:22:04.431 "uuid": "41d7eb04-77e8-5c9e-95d5-f2b1a193162f",
00:22:04.431 "is_configured": true,
00:22:04.431 "data_offset": 2048,
00:22:04.431 "data_size": 63488
00:22:04.431 }
00:22:04.431 ]
00:22:04.431 }'
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:04.431 07:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:04.999 [2024-11-20 07:19:29.133811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:04.999 [2024-11-20 07:19:29.133863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:04.999 [2024-11-20 07:19:29.137235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:04.999 [2024-11-20 07:19:29.137304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:04.999 [2024-11-20 07:19:29.137413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:04.999 [2024-11-20 07:19:29.137457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:22:04.999 {
00:22:04.999 "results": [
00:22:04.999 {
00:22:04.999 "job": "raid_bdev1",
00:22:04.999 "core_mask": "0x1",
00:22:04.999 "workload": "randrw",
00:22:04.999 "percentage": 50,
00:22:04.999 "status": "finished",
00:22:04.999 "queue_depth": 1,
00:22:04.999 "io_size": 131072,
00:22:04.999 "runtime": 1.416528,
00:22:04.999 "iops": 12245.433906001152,
00:22:04.999 "mibps": 1530.679238250144,
00:22:04.999 "io_failed": 0,
00:22:04.999 "io_timeout": 0,
00:22:04.999 "avg_latency_us": 77.58543106610902,
00:22:04.999 "min_latency_us": 38.167272727272724,
00:22:04.999 "max_latency_us": 1995.8690909090908
00:22:04.999 }
00:22:04.999 ],
00:22:04.999 "core_count": 1
00:22:04.999 }
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63727
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63727 ']'
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63727
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63727
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 63727
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63727'
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63727
00:22:04.999 [2024-11-20 07:19:29.178277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:04.999 07:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63727
00:22:05.256 [2024-11-20 07:19:29.300850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.T0aqxy7VmL
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:22:06.193
00:22:06.193 real 0m4.560s
00:22:06.193 user 0m5.731s
00:22:06.193 sys 0m0.608s
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:06.193 07:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:06.193 ************************************
00:22:06.193 END TEST raid_read_error_test
00:22:06.193 ************************************
00:22:06.193 07:19:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:22:06.193 07:19:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:22:06.193 07:19:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:06.193 07:19:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:06.193 ************************************
00:22:06.193 START TEST raid_write_error_test
00:22:06.193 ************************************
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.h1EN3D4Y8m
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63867
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63867
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63867 ']'
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:06.194 07:19:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:06.452 [2024-11-20 07:19:30.556683] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:22:06.452 [2024-11-20 07:19:30.556857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63867 ]
00:22:06.710 [2024-11-20 07:19:30.750104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:06.710 [2024-11-20 07:19:30.905243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:06.969 [2024-11-20 07:19:31.151819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:06.969 [2024-11-20 07:19:31.151897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:07.535 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:07.535 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:22:07.535 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:22:07.535 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:22:07.535 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 BaseBdev1_malloc
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 true
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 [2024-11-20 07:19:31.577599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:22:07.536 [2024-11-20 07:19:31.577667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:07.536 [2024-11-20 07:19:31.577696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:22:07.536 [2024-11-20 07:19:31.577715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:07.536 [2024-11-20 07:19:31.580516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:07.536 [2024-11-20 07:19:31.580567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:22:07.536 BaseBdev1
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 BaseBdev2_malloc
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 true
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 [2024-11-20 07:19:31.638765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:22:07.536 [2024-11-20 07:19:31.638835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:07.536 [2024-11-20 07:19:31.638863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:22:07.536 [2024-11-20 07:19:31.638882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:07.536 [2024-11-20 07:19:31.641653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:07.536 [2024-11-20 07:19:31.641706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:22:07.536 BaseBdev2
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 [2024-11-20 07:19:31.646839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:07.536 [2024-11-20 07:19:31.649409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:07.536 [2024-11-20 07:19:31.649694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:07.536 [2024-11-20 07:19:31.649729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:22:07.536 [2024-11-20 07:19:31.650045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:22:07.536 [2024-11-20 07:19:31.650297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:07.536 [2024-11-20 07:19:31.650324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:22:07.536 [2024-11-20 07:19:31.650516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:07.536 "name": "raid_bdev1",
00:22:07.536 "uuid": "b5928afc-2fcf-45d2-87d1-1622a2833d97",
00:22:07.536 "strip_size_kb": 0,
00:22:07.536 "state": "online",
00:22:07.536 "raid_level": "raid1",
00:22:07.536 "superblock": true,
00:22:07.536 "num_base_bdevs": 2,
00:22:07.536 "num_base_bdevs_discovered": 2,
00:22:07.536 "num_base_bdevs_operational": 2,
00:22:07.536 "base_bdevs_list": [
00:22:07.536 {
00:22:07.536 "name": "BaseBdev1",
00:22:07.536 "uuid": "25c4a3dd-ad25-5cfe-8958-82170738ddb4",
00:22:07.536 "is_configured": true,
00:22:07.536 "data_offset": 2048,
00:22:07.536 "data_size": 63488
00:22:07.536 },
00:22:07.536 {
00:22:07.536 "name": "BaseBdev2",
00:22:07.536 "uuid": "00edec90-e84c-5b7c-8565-b9a54d7bec30",
00:22:07.536 "is_configured": true,
00:22:07.536 "data_offset": 2048,
00:22:07.536 "data_size": 63488
00:22:07.536 }
00:22:07.536 ]
00:22:07.536 }'
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:07.536 07:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:08.101 07:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:22:08.101 07:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:22:08.101 [2024-11-20 07:19:32.320425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.062 [2024-11-20 07:19:33.198138] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:22:09.062 [2024-11-20 07:19:33.198241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:09.062 [2024-11-20 07:19:33.198476] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:09.062 "name": "raid_bdev1",
00:22:09.062 "uuid": "b5928afc-2fcf-45d2-87d1-1622a2833d97",
00:22:09.062 "strip_size_kb": 0,
00:22:09.062 "state": "online",
00:22:09.062 "raid_level": "raid1",
00:22:09.062 "superblock": true,
00:22:09.062 "num_base_bdevs": 2,
00:22:09.062 "num_base_bdevs_discovered": 1,
00:22:09.062 "num_base_bdevs_operational": 1,
00:22:09.062 "base_bdevs_list": [
00:22:09.062 {
00:22:09.062 "name": null,
00:22:09.062 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:09.062 "is_configured": false,
00:22:09.062 "data_offset": 0,
00:22:09.062 "data_size": 63488
00:22:09.062 },
00:22:09.062 {
00:22:09.062 "name": "BaseBdev2",
00:22:09.062 "uuid": "00edec90-e84c-5b7c-8565-b9a54d7bec30",
00:22:09.062 "is_configured": true,
00:22:09.062 "data_offset": 2048,
00:22:09.062 "data_size": 63488
00:22:09.062 }
00:22:09.062 ]
00:22:09.062 }'
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:09.062 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.627 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:09.627 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.627 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.627 [2024-11-20 07:19:33.729751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:09.627 [2024-11-20 07:19:33.729789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:09.627 [2024-11-20 07:19:33.733212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:09.627 [2024-11-20 07:19:33.733329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:09.627 [2024-11-20 07:19:33.733458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:09.627 [2024-11-20 07:19:33.733490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:22:09.627 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.627 {
00:22:09.627 "results": [
00:22:09.627 {
00:22:09.627 "job": "raid_bdev1",
00:22:09.627 "core_mask": "0x1",
00:22:09.627 "workload": "randrw",
00:22:09.627 "percentage": 50,
00:22:09.627 "status": "finished",
00:22:09.627 "queue_depth": 1,
00:22:09.627 "io_size": 131072,
00:22:09.627 "runtime": 1.406813,
00:22:09.627 "iops": 13790.745465104459,
00:22:09.628 "mibps": 1723.8431831380574, 00:22:09.628 "io_failed": 0, 00:22:09.628 "io_timeout": 0, 00:22:09.628 "avg_latency_us": 68.25212795966469, 00:22:09.628 "min_latency_us": 38.167272727272724, 00:22:09.628 "max_latency_us": 1765.0036363636364 00:22:09.628 } 00:22:09.628 ], 00:22:09.628 "core_count": 1 00:22:09.628 } 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63867 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63867 ']' 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63867 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63867 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.628 killing process with pid 63867 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63867' 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63867 00:22:09.628 [2024-11-20 07:19:33.773663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.628 07:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63867 00:22:09.628 [2024-11-20 07:19:33.891782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.h1EN3D4Y8m 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:11.003 00:22:11.003 real 0m4.525s 00:22:11.003 user 0m5.694s 00:22:11.003 sys 0m0.568s 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.003 07:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 ************************************ 00:22:11.003 END TEST raid_write_error_test 00:22:11.003 ************************************ 00:22:11.003 07:19:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:22:11.003 07:19:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:11.003 07:19:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:22:11.003 07:19:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:11.003 07:19:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.003 07:19:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 ************************************ 00:22:11.003 START TEST raid_state_function_test 00:22:11.003 ************************************ 00:22:11.003 07:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:22:11.003 07:19:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:22:11.003 07:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:11.003 07:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:11.003 07:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:11.003 
07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64016 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64016' 00:22:11.003 Process raid pid: 64016 00:22:11.003 07:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64016 00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64016 ']' 00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.004 07:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.004 [2024-11-20 07:19:35.119730] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:11.004 [2024-11-20 07:19:35.119913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.262 [2024-11-20 07:19:35.308742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.262 [2024-11-20 07:19:35.434470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.520 [2024-11-20 07:19:35.640772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.520 [2024-11-20 07:19:35.640824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.088 [2024-11-20 07:19:36.147208] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:12.088 [2024-11-20 07:19:36.147289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:12.088 [2024-11-20 07:19:36.147307] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.088 [2024-11-20 07:19:36.147323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.088 [2024-11-20 07:19:36.147333] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.088 [2024-11-20 07:19:36.147346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.088 07:19:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.088 "name": "Existed_Raid", 00:22:12.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.088 "strip_size_kb": 64, 00:22:12.088 "state": "configuring", 00:22:12.088 "raid_level": "raid0", 00:22:12.088 "superblock": false, 00:22:12.088 "num_base_bdevs": 3, 00:22:12.088 "num_base_bdevs_discovered": 0, 00:22:12.088 "num_base_bdevs_operational": 3, 00:22:12.088 "base_bdevs_list": [ 00:22:12.088 { 00:22:12.088 "name": "BaseBdev1", 00:22:12.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.088 "is_configured": false, 00:22:12.088 "data_offset": 0, 00:22:12.088 "data_size": 0 00:22:12.088 }, 00:22:12.088 { 00:22:12.088 "name": "BaseBdev2", 00:22:12.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.088 "is_configured": false, 00:22:12.088 "data_offset": 0, 00:22:12.088 "data_size": 0 00:22:12.088 }, 00:22:12.088 { 00:22:12.088 "name": "BaseBdev3", 00:22:12.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.088 "is_configured": false, 00:22:12.088 "data_offset": 0, 00:22:12.088 "data_size": 0 00:22:12.088 } 00:22:12.088 ] 00:22:12.088 }' 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.088 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.654 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:12.654 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.654 07:19:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.654 [2024-11-20 07:19:36.719453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.655 [2024-11-20 07:19:36.719521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.655 [2024-11-20 07:19:36.727350] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:12.655 [2024-11-20 07:19:36.727416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:12.655 [2024-11-20 07:19:36.727431] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.655 [2024-11-20 07:19:36.727448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.655 [2024-11-20 07:19:36.727458] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.655 [2024-11-20 07:19:36.727472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.655 [2024-11-20 07:19:36.774207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.655 BaseBdev1 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.655 [ 00:22:12.655 { 00:22:12.655 "name": "BaseBdev1", 00:22:12.655 "aliases": [ 00:22:12.655 "295bb0a4-243c-4ec6-b554-5d3066e1b20d" 00:22:12.655 ], 00:22:12.655 
"product_name": "Malloc disk", 00:22:12.655 "block_size": 512, 00:22:12.655 "num_blocks": 65536, 00:22:12.655 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:12.655 "assigned_rate_limits": { 00:22:12.655 "rw_ios_per_sec": 0, 00:22:12.655 "rw_mbytes_per_sec": 0, 00:22:12.655 "r_mbytes_per_sec": 0, 00:22:12.655 "w_mbytes_per_sec": 0 00:22:12.655 }, 00:22:12.655 "claimed": true, 00:22:12.655 "claim_type": "exclusive_write", 00:22:12.655 "zoned": false, 00:22:12.655 "supported_io_types": { 00:22:12.655 "read": true, 00:22:12.655 "write": true, 00:22:12.655 "unmap": true, 00:22:12.655 "flush": true, 00:22:12.655 "reset": true, 00:22:12.655 "nvme_admin": false, 00:22:12.655 "nvme_io": false, 00:22:12.655 "nvme_io_md": false, 00:22:12.655 "write_zeroes": true, 00:22:12.655 "zcopy": true, 00:22:12.655 "get_zone_info": false, 00:22:12.655 "zone_management": false, 00:22:12.655 "zone_append": false, 00:22:12.655 "compare": false, 00:22:12.655 "compare_and_write": false, 00:22:12.655 "abort": true, 00:22:12.655 "seek_hole": false, 00:22:12.655 "seek_data": false, 00:22:12.655 "copy": true, 00:22:12.655 "nvme_iov_md": false 00:22:12.655 }, 00:22:12.655 "memory_domains": [ 00:22:12.655 { 00:22:12.655 "dma_device_id": "system", 00:22:12.655 "dma_device_type": 1 00:22:12.655 }, 00:22:12.655 { 00:22:12.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.655 "dma_device_type": 2 00:22:12.655 } 00:22:12.655 ], 00:22:12.655 "driver_specific": {} 00:22:12.655 } 00:22:12.655 ] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.655 07:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.655 "name": "Existed_Raid", 00:22:12.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.655 "strip_size_kb": 64, 00:22:12.655 "state": "configuring", 00:22:12.655 "raid_level": "raid0", 00:22:12.655 "superblock": false, 00:22:12.655 "num_base_bdevs": 3, 00:22:12.655 "num_base_bdevs_discovered": 1, 00:22:12.655 "num_base_bdevs_operational": 3, 00:22:12.655 "base_bdevs_list": [ 00:22:12.655 { 00:22:12.655 "name": "BaseBdev1", 
00:22:12.655 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:12.655 "is_configured": true, 00:22:12.655 "data_offset": 0, 00:22:12.655 "data_size": 65536 00:22:12.655 }, 00:22:12.655 { 00:22:12.655 "name": "BaseBdev2", 00:22:12.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.655 "is_configured": false, 00:22:12.655 "data_offset": 0, 00:22:12.655 "data_size": 0 00:22:12.655 }, 00:22:12.655 { 00:22:12.655 "name": "BaseBdev3", 00:22:12.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.655 "is_configured": false, 00:22:12.655 "data_offset": 0, 00:22:12.655 "data_size": 0 00:22:12.655 } 00:22:12.655 ] 00:22:12.655 }' 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.655 07:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 [2024-11-20 07:19:37.346468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:13.277 [2024-11-20 07:19:37.346724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 [2024-11-20 
07:19:37.354550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.277 [2024-11-20 07:19:37.357188] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:13.277 [2024-11-20 07:19:37.357242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:13.277 [2024-11-20 07:19:37.357275] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:13.277 [2024-11-20 07:19:37.357307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.277 "name": "Existed_Raid", 00:22:13.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.277 "strip_size_kb": 64, 00:22:13.277 "state": "configuring", 00:22:13.277 "raid_level": "raid0", 00:22:13.277 "superblock": false, 00:22:13.277 "num_base_bdevs": 3, 00:22:13.277 "num_base_bdevs_discovered": 1, 00:22:13.277 "num_base_bdevs_operational": 3, 00:22:13.277 "base_bdevs_list": [ 00:22:13.277 { 00:22:13.277 "name": "BaseBdev1", 00:22:13.277 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:13.277 "is_configured": true, 00:22:13.277 "data_offset": 0, 00:22:13.277 "data_size": 65536 00:22:13.277 }, 00:22:13.277 { 00:22:13.277 "name": "BaseBdev2", 00:22:13.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.277 "is_configured": false, 00:22:13.277 "data_offset": 0, 00:22:13.277 "data_size": 0 00:22:13.277 }, 00:22:13.277 { 00:22:13.277 "name": "BaseBdev3", 00:22:13.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.277 "is_configured": false, 00:22:13.277 "data_offset": 0, 00:22:13.277 "data_size": 0 00:22:13.277 } 00:22:13.277 ] 00:22:13.277 }' 00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:22:13.277 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.845 [2024-11-20 07:19:37.954978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.845 BaseBdev2 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:13.845 07:19:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.845 [ 00:22:13.845 { 00:22:13.845 "name": "BaseBdev2", 00:22:13.845 "aliases": [ 00:22:13.845 "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86" 00:22:13.845 ], 00:22:13.845 "product_name": "Malloc disk", 00:22:13.845 "block_size": 512, 00:22:13.845 "num_blocks": 65536, 00:22:13.845 "uuid": "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86", 00:22:13.845 "assigned_rate_limits": { 00:22:13.845 "rw_ios_per_sec": 0, 00:22:13.845 "rw_mbytes_per_sec": 0, 00:22:13.845 "r_mbytes_per_sec": 0, 00:22:13.845 "w_mbytes_per_sec": 0 00:22:13.845 }, 00:22:13.845 "claimed": true, 00:22:13.845 "claim_type": "exclusive_write", 00:22:13.845 "zoned": false, 00:22:13.845 "supported_io_types": { 00:22:13.845 "read": true, 00:22:13.845 "write": true, 00:22:13.845 "unmap": true, 00:22:13.845 "flush": true, 00:22:13.845 "reset": true, 00:22:13.845 "nvme_admin": false, 00:22:13.845 "nvme_io": false, 00:22:13.845 "nvme_io_md": false, 00:22:13.845 "write_zeroes": true, 00:22:13.845 "zcopy": true, 00:22:13.845 "get_zone_info": false, 00:22:13.845 "zone_management": false, 00:22:13.845 "zone_append": false, 00:22:13.845 "compare": false, 00:22:13.845 "compare_and_write": false, 00:22:13.845 "abort": true, 00:22:13.845 "seek_hole": false, 00:22:13.845 "seek_data": false, 00:22:13.845 "copy": true, 00:22:13.845 "nvme_iov_md": false 00:22:13.845 }, 00:22:13.845 "memory_domains": [ 00:22:13.845 { 00:22:13.845 "dma_device_id": "system", 00:22:13.845 "dma_device_type": 1 00:22:13.845 }, 00:22:13.845 { 00:22:13.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.845 "dma_device_type": 2 00:22:13.845 } 00:22:13.845 ], 00:22:13.845 "driver_specific": {} 00:22:13.845 } 00:22:13.845 ] 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.845 07:19:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.845 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.846 07:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.846 07:19:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.846 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.846 "name": "Existed_Raid", 00:22:13.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.846 "strip_size_kb": 64, 00:22:13.846 "state": "configuring", 00:22:13.846 "raid_level": "raid0", 00:22:13.846 "superblock": false, 00:22:13.846 "num_base_bdevs": 3, 00:22:13.846 "num_base_bdevs_discovered": 2, 00:22:13.846 "num_base_bdevs_operational": 3, 00:22:13.846 "base_bdevs_list": [ 00:22:13.846 { 00:22:13.846 "name": "BaseBdev1", 00:22:13.846 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:13.846 "is_configured": true, 00:22:13.846 "data_offset": 0, 00:22:13.846 "data_size": 65536 00:22:13.846 }, 00:22:13.846 { 00:22:13.846 "name": "BaseBdev2", 00:22:13.846 "uuid": "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86", 00:22:13.846 "is_configured": true, 00:22:13.846 "data_offset": 0, 00:22:13.846 "data_size": 65536 00:22:13.846 }, 00:22:13.846 { 00:22:13.846 "name": "BaseBdev3", 00:22:13.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.846 "is_configured": false, 00:22:13.846 "data_offset": 0, 00:22:13.846 "data_size": 0 00:22:13.846 } 00:22:13.846 ] 00:22:13.846 }' 00:22:13.846 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.846 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.413 [2024-11-20 07:19:38.598624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:14.413 [2024-11-20 07:19:38.598732] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:14.413 [2024-11-20 07:19:38.598755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:14.413 [2024-11-20 07:19:38.599136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:14.413 [2024-11-20 07:19:38.599359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:14.413 [2024-11-20 07:19:38.599384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:14.413 [2024-11-20 07:19:38.599752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.413 BaseBdev3 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.413 
07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.413 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.413 [ 00:22:14.413 { 00:22:14.413 "name": "BaseBdev3", 00:22:14.413 "aliases": [ 00:22:14.413 "440fec82-d894-42bf-96a9-2c00be21ed2e" 00:22:14.413 ], 00:22:14.413 "product_name": "Malloc disk", 00:22:14.413 "block_size": 512, 00:22:14.413 "num_blocks": 65536, 00:22:14.413 "uuid": "440fec82-d894-42bf-96a9-2c00be21ed2e", 00:22:14.413 "assigned_rate_limits": { 00:22:14.413 "rw_ios_per_sec": 0, 00:22:14.413 "rw_mbytes_per_sec": 0, 00:22:14.413 "r_mbytes_per_sec": 0, 00:22:14.413 "w_mbytes_per_sec": 0 00:22:14.413 }, 00:22:14.413 "claimed": true, 00:22:14.413 "claim_type": "exclusive_write", 00:22:14.413 "zoned": false, 00:22:14.413 "supported_io_types": { 00:22:14.413 "read": true, 00:22:14.413 "write": true, 00:22:14.413 "unmap": true, 00:22:14.413 "flush": true, 00:22:14.413 "reset": true, 00:22:14.413 "nvme_admin": false, 00:22:14.413 "nvme_io": false, 00:22:14.413 "nvme_io_md": false, 00:22:14.413 "write_zeroes": true, 00:22:14.413 "zcopy": true, 00:22:14.413 "get_zone_info": false, 00:22:14.413 "zone_management": false, 00:22:14.413 "zone_append": false, 00:22:14.413 "compare": false, 00:22:14.413 "compare_and_write": false, 00:22:14.413 "abort": true, 00:22:14.413 "seek_hole": false, 00:22:14.413 "seek_data": false, 00:22:14.413 "copy": true, 00:22:14.413 "nvme_iov_md": false 00:22:14.413 }, 00:22:14.413 "memory_domains": [ 00:22:14.413 { 00:22:14.414 "dma_device_id": "system", 00:22:14.414 "dma_device_type": 1 00:22:14.414 }, 00:22:14.414 { 00:22:14.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.414 "dma_device_type": 2 00:22:14.414 } 00:22:14.414 ], 00:22:14.414 "driver_specific": {} 00:22:14.414 } 00:22:14.414 ] 
00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.414 "name": "Existed_Raid", 00:22:14.414 "uuid": "7d32e288-9521-45f2-a694-762ba0333b43", 00:22:14.414 "strip_size_kb": 64, 00:22:14.414 "state": "online", 00:22:14.414 "raid_level": "raid0", 00:22:14.414 "superblock": false, 00:22:14.414 "num_base_bdevs": 3, 00:22:14.414 "num_base_bdevs_discovered": 3, 00:22:14.414 "num_base_bdevs_operational": 3, 00:22:14.414 "base_bdevs_list": [ 00:22:14.414 { 00:22:14.414 "name": "BaseBdev1", 00:22:14.414 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:14.414 "is_configured": true, 00:22:14.414 "data_offset": 0, 00:22:14.414 "data_size": 65536 00:22:14.414 }, 00:22:14.414 { 00:22:14.414 "name": "BaseBdev2", 00:22:14.414 "uuid": "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86", 00:22:14.414 "is_configured": true, 00:22:14.414 "data_offset": 0, 00:22:14.414 "data_size": 65536 00:22:14.414 }, 00:22:14.414 { 00:22:14.414 "name": "BaseBdev3", 00:22:14.414 "uuid": "440fec82-d894-42bf-96a9-2c00be21ed2e", 00:22:14.414 "is_configured": true, 00:22:14.414 "data_offset": 0, 00:22:14.414 "data_size": 65536 00:22:14.414 } 00:22:14.414 ] 00:22:14.414 }' 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.414 07:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.982 [2024-11-20 07:19:39.151221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.982 "name": "Existed_Raid", 00:22:14.982 "aliases": [ 00:22:14.982 "7d32e288-9521-45f2-a694-762ba0333b43" 00:22:14.982 ], 00:22:14.982 "product_name": "Raid Volume", 00:22:14.982 "block_size": 512, 00:22:14.982 "num_blocks": 196608, 00:22:14.982 "uuid": "7d32e288-9521-45f2-a694-762ba0333b43", 00:22:14.982 "assigned_rate_limits": { 00:22:14.982 "rw_ios_per_sec": 0, 00:22:14.982 "rw_mbytes_per_sec": 0, 00:22:14.982 "r_mbytes_per_sec": 0, 00:22:14.982 "w_mbytes_per_sec": 0 00:22:14.982 }, 00:22:14.982 "claimed": false, 00:22:14.982 "zoned": false, 00:22:14.982 "supported_io_types": { 00:22:14.982 "read": true, 00:22:14.982 "write": true, 00:22:14.982 "unmap": true, 00:22:14.982 "flush": true, 00:22:14.982 "reset": true, 00:22:14.982 "nvme_admin": false, 00:22:14.982 "nvme_io": false, 00:22:14.982 "nvme_io_md": false, 00:22:14.982 "write_zeroes": true, 00:22:14.982 "zcopy": false, 00:22:14.982 "get_zone_info": false, 00:22:14.982 "zone_management": false, 00:22:14.982 
"zone_append": false, 00:22:14.982 "compare": false, 00:22:14.982 "compare_and_write": false, 00:22:14.982 "abort": false, 00:22:14.982 "seek_hole": false, 00:22:14.982 "seek_data": false, 00:22:14.982 "copy": false, 00:22:14.982 "nvme_iov_md": false 00:22:14.982 }, 00:22:14.982 "memory_domains": [ 00:22:14.982 { 00:22:14.982 "dma_device_id": "system", 00:22:14.982 "dma_device_type": 1 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.982 "dma_device_type": 2 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "dma_device_id": "system", 00:22:14.982 "dma_device_type": 1 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.982 "dma_device_type": 2 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "dma_device_id": "system", 00:22:14.982 "dma_device_type": 1 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.982 "dma_device_type": 2 00:22:14.982 } 00:22:14.982 ], 00:22:14.982 "driver_specific": { 00:22:14.982 "raid": { 00:22:14.982 "uuid": "7d32e288-9521-45f2-a694-762ba0333b43", 00:22:14.982 "strip_size_kb": 64, 00:22:14.982 "state": "online", 00:22:14.982 "raid_level": "raid0", 00:22:14.982 "superblock": false, 00:22:14.982 "num_base_bdevs": 3, 00:22:14.982 "num_base_bdevs_discovered": 3, 00:22:14.982 "num_base_bdevs_operational": 3, 00:22:14.982 "base_bdevs_list": [ 00:22:14.982 { 00:22:14.982 "name": "BaseBdev1", 00:22:14.982 "uuid": "295bb0a4-243c-4ec6-b554-5d3066e1b20d", 00:22:14.982 "is_configured": true, 00:22:14.982 "data_offset": 0, 00:22:14.982 "data_size": 65536 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "name": "BaseBdev2", 00:22:14.982 "uuid": "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86", 00:22:14.982 "is_configured": true, 00:22:14.982 "data_offset": 0, 00:22:14.982 "data_size": 65536 00:22:14.982 }, 00:22:14.982 { 00:22:14.982 "name": "BaseBdev3", 00:22:14.982 "uuid": "440fec82-d894-42bf-96a9-2c00be21ed2e", 00:22:14.982 "is_configured": true, 
00:22:14.982 "data_offset": 0, 00:22:14.982 "data_size": 65536 00:22:14.982 } 00:22:14.982 ] 00:22:14.982 } 00:22:14.982 } 00:22:14.982 }' 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:14.982 BaseBdev2 00:22:14.982 BaseBdev3' 00:22:14.982 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 [2024-11-20 07:19:39.439001] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:15.241 [2024-11-20 07:19:39.439050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.241 [2024-11-20 07:19:39.439118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.500 07:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.500 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.500 "name": "Existed_Raid", 00:22:15.500 "uuid": "7d32e288-9521-45f2-a694-762ba0333b43", 00:22:15.500 "strip_size_kb": 64, 00:22:15.500 "state": "offline", 00:22:15.500 "raid_level": "raid0", 00:22:15.500 "superblock": false, 00:22:15.500 "num_base_bdevs": 3, 00:22:15.500 "num_base_bdevs_discovered": 2, 00:22:15.500 "num_base_bdevs_operational": 2, 00:22:15.500 "base_bdevs_list": [ 00:22:15.500 { 00:22:15.500 "name": null, 00:22:15.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.500 "is_configured": false, 00:22:15.500 "data_offset": 0, 00:22:15.500 "data_size": 65536 00:22:15.500 }, 00:22:15.500 { 00:22:15.500 "name": "BaseBdev2", 00:22:15.500 "uuid": "288c7dbc-f6e7-4c4e-8329-7d1ac633ba86", 00:22:15.500 "is_configured": true, 00:22:15.500 "data_offset": 0, 00:22:15.500 "data_size": 65536 00:22:15.500 }, 00:22:15.500 { 00:22:15.500 "name": "BaseBdev3", 00:22:15.500 "uuid": "440fec82-d894-42bf-96a9-2c00be21ed2e", 00:22:15.500 "is_configured": true, 00:22:15.500 "data_offset": 0, 00:22:15.500 "data_size": 65536 00:22:15.500 } 00:22:15.500 ] 00:22:15.500 }' 00:22:15.500 07:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.500 07:19:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:15.760 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.018 [2024-11-20 07:19:40.089883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.018 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.018 [2024-11-20 07:19:40.237505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:16.018 [2024-11-20 07:19:40.237573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 07:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 BaseBdev2 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 [ 00:22:16.278 { 00:22:16.278 "name": "BaseBdev2", 00:22:16.278 "aliases": [ 00:22:16.278 "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a" 00:22:16.278 ], 00:22:16.278 "product_name": "Malloc disk", 00:22:16.278 "block_size": 512, 00:22:16.278 "num_blocks": 65536, 00:22:16.278 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:16.278 "assigned_rate_limits": { 00:22:16.278 "rw_ios_per_sec": 0, 00:22:16.278 "rw_mbytes_per_sec": 0, 00:22:16.278 "r_mbytes_per_sec": 0, 00:22:16.278 "w_mbytes_per_sec": 0 00:22:16.278 }, 00:22:16.278 "claimed": false, 00:22:16.278 "zoned": false, 00:22:16.278 "supported_io_types": { 00:22:16.278 "read": true, 00:22:16.278 "write": true, 00:22:16.278 "unmap": true, 00:22:16.278 "flush": true, 00:22:16.278 "reset": true, 00:22:16.278 "nvme_admin": false, 00:22:16.278 "nvme_io": false, 00:22:16.278 "nvme_io_md": false, 00:22:16.278 "write_zeroes": true, 00:22:16.278 "zcopy": true, 00:22:16.278 "get_zone_info": false, 00:22:16.278 "zone_management": false, 00:22:16.278 "zone_append": false, 00:22:16.278 "compare": false, 00:22:16.278 "compare_and_write": false, 00:22:16.278 "abort": true, 00:22:16.278 "seek_hole": false, 00:22:16.278 "seek_data": false, 00:22:16.278 "copy": true, 00:22:16.278 "nvme_iov_md": false 00:22:16.278 }, 00:22:16.278 "memory_domains": [ 00:22:16.278 { 00:22:16.278 "dma_device_id": "system", 00:22:16.278 "dma_device_type": 1 00:22:16.278 }, 00:22:16.278 { 00:22:16.278 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:16.278 "dma_device_type": 2 00:22:16.278 } 00:22:16.278 ], 00:22:16.278 "driver_specific": {} 00:22:16.278 } 00:22:16.278 ] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 BaseBdev3 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 [ 00:22:16.278 { 00:22:16.278 "name": "BaseBdev3", 00:22:16.278 "aliases": [ 00:22:16.278 "451605f6-8570-412b-9f78-5319bc7852ad" 00:22:16.278 ], 00:22:16.278 "product_name": "Malloc disk", 00:22:16.278 "block_size": 512, 00:22:16.278 "num_blocks": 65536, 00:22:16.278 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:16.278 "assigned_rate_limits": { 00:22:16.278 "rw_ios_per_sec": 0, 00:22:16.278 "rw_mbytes_per_sec": 0, 00:22:16.278 "r_mbytes_per_sec": 0, 00:22:16.278 "w_mbytes_per_sec": 0 00:22:16.278 }, 00:22:16.278 "claimed": false, 00:22:16.278 "zoned": false, 00:22:16.278 "supported_io_types": { 00:22:16.278 "read": true, 00:22:16.278 "write": true, 00:22:16.278 "unmap": true, 00:22:16.278 "flush": true, 00:22:16.278 "reset": true, 00:22:16.278 "nvme_admin": false, 00:22:16.278 "nvme_io": false, 00:22:16.278 "nvme_io_md": false, 00:22:16.278 "write_zeroes": true, 00:22:16.278 "zcopy": true, 00:22:16.278 "get_zone_info": false, 00:22:16.278 "zone_management": false, 00:22:16.278 "zone_append": false, 00:22:16.278 "compare": false, 00:22:16.278 "compare_and_write": false, 00:22:16.278 "abort": true, 00:22:16.278 "seek_hole": false, 00:22:16.278 "seek_data": false, 00:22:16.278 "copy": true, 00:22:16.278 "nvme_iov_md": false 00:22:16.278 }, 00:22:16.278 "memory_domains": [ 00:22:16.278 { 00:22:16.278 "dma_device_id": "system", 00:22:16.278 "dma_device_type": 1 00:22:16.278 }, 00:22:16.278 { 00:22:16.278 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:16.278 "dma_device_type": 2 00:22:16.278 } 00:22:16.278 ], 00:22:16.278 "driver_specific": {} 00:22:16.278 } 00:22:16.278 ] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.278 [2024-11-20 07:19:40.538618] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:16.278 [2024-11-20 07:19:40.538815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:16.278 [2024-11-20 07:19:40.538869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.278 [2024-11-20 07:19:40.541243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:16.278 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:16.279 
07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.279 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.537 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.537 "name": "Existed_Raid", 00:22:16.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.537 "strip_size_kb": 64, 00:22:16.537 "state": "configuring", 00:22:16.537 "raid_level": "raid0", 00:22:16.537 "superblock": false, 00:22:16.537 "num_base_bdevs": 3, 00:22:16.537 "num_base_bdevs_discovered": 2, 00:22:16.537 "num_base_bdevs_operational": 3, 00:22:16.537 "base_bdevs_list": [ 00:22:16.537 { 00:22:16.537 "name": "BaseBdev1", 00:22:16.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.537 "is_configured": false, 00:22:16.537 
"data_offset": 0, 00:22:16.537 "data_size": 0 00:22:16.537 }, 00:22:16.537 { 00:22:16.537 "name": "BaseBdev2", 00:22:16.537 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:16.537 "is_configured": true, 00:22:16.537 "data_offset": 0, 00:22:16.537 "data_size": 65536 00:22:16.537 }, 00:22:16.537 { 00:22:16.537 "name": "BaseBdev3", 00:22:16.537 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:16.537 "is_configured": true, 00:22:16.537 "data_offset": 0, 00:22:16.537 "data_size": 65536 00:22:16.537 } 00:22:16.537 ] 00:22:16.537 }' 00:22:16.537 07:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.537 07:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.796 [2024-11-20 07:19:41.058846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:16.796 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.797 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.055 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.055 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.055 "name": "Existed_Raid", 00:22:17.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.055 "strip_size_kb": 64, 00:22:17.055 "state": "configuring", 00:22:17.055 "raid_level": "raid0", 00:22:17.055 "superblock": false, 00:22:17.055 "num_base_bdevs": 3, 00:22:17.055 "num_base_bdevs_discovered": 1, 00:22:17.055 "num_base_bdevs_operational": 3, 00:22:17.055 "base_bdevs_list": [ 00:22:17.055 { 00:22:17.055 "name": "BaseBdev1", 00:22:17.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.055 "is_configured": false, 00:22:17.055 "data_offset": 0, 00:22:17.055 "data_size": 0 00:22:17.055 }, 00:22:17.055 { 00:22:17.055 "name": null, 00:22:17.055 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:17.055 "is_configured": false, 00:22:17.055 "data_offset": 0, 00:22:17.055 "data_size": 65536 00:22:17.055 }, 00:22:17.055 { 
00:22:17.055 "name": "BaseBdev3", 00:22:17.055 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:17.055 "is_configured": true, 00:22:17.055 "data_offset": 0, 00:22:17.055 "data_size": 65536 00:22:17.055 } 00:22:17.055 ] 00:22:17.055 }' 00:22:17.055 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.055 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 [2024-11-20 07:19:41.704122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.622 BaseBdev1 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:17.622 07:19:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 [ 00:22:17.622 { 00:22:17.622 "name": "BaseBdev1", 00:22:17.622 "aliases": [ 00:22:17.622 "b2e91b1a-d0c7-4cf8-9e0d-38629a418772" 00:22:17.622 ], 00:22:17.622 "product_name": "Malloc disk", 00:22:17.622 "block_size": 512, 00:22:17.622 "num_blocks": 65536, 00:22:17.622 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:17.622 "assigned_rate_limits": { 00:22:17.622 "rw_ios_per_sec": 0, 00:22:17.622 "rw_mbytes_per_sec": 0, 00:22:17.622 "r_mbytes_per_sec": 0, 00:22:17.622 "w_mbytes_per_sec": 0 00:22:17.622 }, 00:22:17.622 "claimed": true, 00:22:17.622 "claim_type": "exclusive_write", 00:22:17.622 "zoned": false, 00:22:17.622 "supported_io_types": { 00:22:17.622 "read": true, 00:22:17.622 "write": true, 00:22:17.622 "unmap": true, 00:22:17.622 "flush": true, 
00:22:17.622 "reset": true, 00:22:17.622 "nvme_admin": false, 00:22:17.622 "nvme_io": false, 00:22:17.622 "nvme_io_md": false, 00:22:17.622 "write_zeroes": true, 00:22:17.622 "zcopy": true, 00:22:17.622 "get_zone_info": false, 00:22:17.623 "zone_management": false, 00:22:17.623 "zone_append": false, 00:22:17.623 "compare": false, 00:22:17.623 "compare_and_write": false, 00:22:17.623 "abort": true, 00:22:17.623 "seek_hole": false, 00:22:17.623 "seek_data": false, 00:22:17.623 "copy": true, 00:22:17.623 "nvme_iov_md": false 00:22:17.623 }, 00:22:17.623 "memory_domains": [ 00:22:17.623 { 00:22:17.623 "dma_device_id": "system", 00:22:17.623 "dma_device_type": 1 00:22:17.623 }, 00:22:17.623 { 00:22:17.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.623 "dma_device_type": 2 00:22:17.623 } 00:22:17.623 ], 00:22:17.623 "driver_specific": {} 00:22:17.623 } 00:22:17.623 ] 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.623 "name": "Existed_Raid", 00:22:17.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.623 "strip_size_kb": 64, 00:22:17.623 "state": "configuring", 00:22:17.623 "raid_level": "raid0", 00:22:17.623 "superblock": false, 00:22:17.623 "num_base_bdevs": 3, 00:22:17.623 "num_base_bdevs_discovered": 2, 00:22:17.623 "num_base_bdevs_operational": 3, 00:22:17.623 "base_bdevs_list": [ 00:22:17.623 { 00:22:17.623 "name": "BaseBdev1", 00:22:17.623 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:17.623 "is_configured": true, 00:22:17.623 "data_offset": 0, 00:22:17.623 "data_size": 65536 00:22:17.623 }, 00:22:17.623 { 00:22:17.623 "name": null, 00:22:17.623 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:17.623 "is_configured": false, 00:22:17.623 "data_offset": 0, 00:22:17.623 "data_size": 65536 00:22:17.623 }, 00:22:17.623 { 00:22:17.623 "name": "BaseBdev3", 00:22:17.623 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:17.623 "is_configured": true, 00:22:17.623 "data_offset": 0, 00:22:17.623 "data_size": 65536 
00:22:17.623 } 00:22:17.623 ] 00:22:17.623 }' 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.623 07:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 [2024-11-20 07:19:42.312321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.190 
07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.190 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.190 "name": "Existed_Raid", 00:22:18.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.190 "strip_size_kb": 64, 00:22:18.190 "state": "configuring", 00:22:18.190 "raid_level": "raid0", 00:22:18.190 "superblock": false, 00:22:18.190 "num_base_bdevs": 3, 00:22:18.190 "num_base_bdevs_discovered": 1, 00:22:18.190 "num_base_bdevs_operational": 3, 00:22:18.190 "base_bdevs_list": [ 00:22:18.190 { 00:22:18.190 "name": "BaseBdev1", 00:22:18.190 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:18.190 "is_configured": true, 00:22:18.190 "data_offset": 0, 00:22:18.190 "data_size": 65536 00:22:18.190 }, 00:22:18.190 { 00:22:18.190 "name": null, 
00:22:18.190 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:18.190 "is_configured": false, 00:22:18.190 "data_offset": 0, 00:22:18.190 "data_size": 65536 00:22:18.190 }, 00:22:18.190 { 00:22:18.191 "name": null, 00:22:18.191 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:18.191 "is_configured": false, 00:22:18.191 "data_offset": 0, 00:22:18.191 "data_size": 65536 00:22:18.191 } 00:22:18.191 ] 00:22:18.191 }' 00:22:18.191 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.191 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.828 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:18.828 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.828 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.829 [2024-11-20 07:19:42.876632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.829 "name": "Existed_Raid", 00:22:18.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.829 "strip_size_kb": 64, 00:22:18.829 "state": "configuring", 00:22:18.829 "raid_level": "raid0", 00:22:18.829 "superblock": false, 00:22:18.829 
"num_base_bdevs": 3, 00:22:18.829 "num_base_bdevs_discovered": 2, 00:22:18.829 "num_base_bdevs_operational": 3, 00:22:18.829 "base_bdevs_list": [ 00:22:18.829 { 00:22:18.829 "name": "BaseBdev1", 00:22:18.829 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:18.829 "is_configured": true, 00:22:18.829 "data_offset": 0, 00:22:18.829 "data_size": 65536 00:22:18.829 }, 00:22:18.829 { 00:22:18.829 "name": null, 00:22:18.829 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:18.829 "is_configured": false, 00:22:18.829 "data_offset": 0, 00:22:18.829 "data_size": 65536 00:22:18.829 }, 00:22:18.829 { 00:22:18.829 "name": "BaseBdev3", 00:22:18.829 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:18.829 "is_configured": true, 00:22:18.829 "data_offset": 0, 00:22:18.829 "data_size": 65536 00:22:18.829 } 00:22:18.829 ] 00:22:18.829 }' 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.829 07:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.395 07:19:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.395 [2024-11-20 07:19:43.460753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.395 "name": "Existed_Raid", 00:22:19.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.395 "strip_size_kb": 64, 00:22:19.395 "state": "configuring", 00:22:19.395 "raid_level": "raid0", 00:22:19.395 "superblock": false, 00:22:19.395 "num_base_bdevs": 3, 00:22:19.395 "num_base_bdevs_discovered": 1, 00:22:19.395 "num_base_bdevs_operational": 3, 00:22:19.395 "base_bdevs_list": [ 00:22:19.395 { 00:22:19.395 "name": null, 00:22:19.395 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:19.395 "is_configured": false, 00:22:19.395 "data_offset": 0, 00:22:19.395 "data_size": 65536 00:22:19.395 }, 00:22:19.395 { 00:22:19.395 "name": null, 00:22:19.395 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:19.395 "is_configured": false, 00:22:19.395 "data_offset": 0, 00:22:19.395 "data_size": 65536 00:22:19.395 }, 00:22:19.395 { 00:22:19.395 "name": "BaseBdev3", 00:22:19.395 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:19.395 "is_configured": true, 00:22:19.395 "data_offset": 0, 00:22:19.395 "data_size": 65536 00:22:19.395 } 00:22:19.395 ] 00:22:19.395 }' 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.395 07:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.964 [2024-11-20 07:19:44.114121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.964 "name": "Existed_Raid", 00:22:19.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.964 "strip_size_kb": 64, 00:22:19.964 "state": "configuring", 00:22:19.964 "raid_level": "raid0", 00:22:19.964 "superblock": false, 00:22:19.964 "num_base_bdevs": 3, 00:22:19.964 "num_base_bdevs_discovered": 2, 00:22:19.964 "num_base_bdevs_operational": 3, 00:22:19.964 "base_bdevs_list": [ 00:22:19.964 { 00:22:19.964 "name": null, 00:22:19.964 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:19.964 "is_configured": false, 00:22:19.964 "data_offset": 0, 00:22:19.964 "data_size": 65536 00:22:19.964 }, 00:22:19.964 { 00:22:19.964 "name": "BaseBdev2", 00:22:19.964 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:19.964 "is_configured": true, 00:22:19.964 "data_offset": 0, 00:22:19.964 "data_size": 65536 00:22:19.964 }, 00:22:19.964 { 00:22:19.964 "name": "BaseBdev3", 00:22:19.964 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:19.964 "is_configured": true, 00:22:19.964 "data_offset": 0, 00:22:19.964 "data_size": 65536 00:22:19.964 } 00:22:19.964 ] 00:22:19.964 }' 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.964 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.532 07:19:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2e91b1a-d0c7-4cf8-9e0d-38629a418772 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 [2024-11-20 07:19:44.792675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:20.532 [2024-11-20 07:19:44.792732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:20.532 [2024-11-20 07:19:44.792748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:20.532 [2024-11-20 07:19:44.793057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:22:20.532 [2024-11-20 07:19:44.793259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:20.532 [2024-11-20 07:19:44.793275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:20.532 [2024-11-20 07:19:44.793624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.532 NewBaseBdev 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:20.532 [ 00:22:20.532 { 00:22:20.532 "name": "NewBaseBdev", 00:22:20.532 "aliases": [ 00:22:20.532 "b2e91b1a-d0c7-4cf8-9e0d-38629a418772" 00:22:20.532 ], 00:22:20.532 "product_name": "Malloc disk", 00:22:20.532 "block_size": 512, 00:22:20.532 "num_blocks": 65536, 00:22:20.532 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:20.532 "assigned_rate_limits": { 00:22:20.532 "rw_ios_per_sec": 0, 00:22:20.532 "rw_mbytes_per_sec": 0, 00:22:20.532 "r_mbytes_per_sec": 0, 00:22:20.532 "w_mbytes_per_sec": 0 00:22:20.532 }, 00:22:20.532 "claimed": true, 00:22:20.532 "claim_type": "exclusive_write", 00:22:20.532 "zoned": false, 00:22:20.532 "supported_io_types": { 00:22:20.532 "read": true, 00:22:20.532 "write": true, 00:22:20.532 "unmap": true, 00:22:20.532 "flush": true, 00:22:20.532 "reset": true, 00:22:20.532 "nvme_admin": false, 00:22:20.791 "nvme_io": false, 00:22:20.791 "nvme_io_md": false, 00:22:20.791 "write_zeroes": true, 00:22:20.791 "zcopy": true, 00:22:20.791 "get_zone_info": false, 00:22:20.791 "zone_management": false, 00:22:20.791 "zone_append": false, 00:22:20.791 "compare": false, 00:22:20.791 "compare_and_write": false, 00:22:20.791 "abort": true, 00:22:20.791 "seek_hole": false, 00:22:20.791 "seek_data": false, 00:22:20.791 "copy": true, 00:22:20.791 "nvme_iov_md": false 00:22:20.791 }, 00:22:20.791 "memory_domains": [ 00:22:20.791 { 00:22:20.791 "dma_device_id": "system", 00:22:20.791 "dma_device_type": 1 00:22:20.791 }, 00:22:20.791 { 00:22:20.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.791 "dma_device_type": 2 00:22:20.791 } 00:22:20.791 ], 00:22:20.791 "driver_specific": {} 00:22:20.791 } 00:22:20.791 ] 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.791 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.791 "name": "Existed_Raid", 00:22:20.791 "uuid": "96dc2756-2efc-44fd-94ed-bf3631a29d63", 00:22:20.791 "strip_size_kb": 64, 00:22:20.791 "state": "online", 00:22:20.791 "raid_level": "raid0", 00:22:20.791 "superblock": false, 00:22:20.791 "num_base_bdevs": 3, 00:22:20.791 
"num_base_bdevs_discovered": 3, 00:22:20.791 "num_base_bdevs_operational": 3, 00:22:20.791 "base_bdevs_list": [ 00:22:20.791 { 00:22:20.792 "name": "NewBaseBdev", 00:22:20.792 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:20.792 "is_configured": true, 00:22:20.792 "data_offset": 0, 00:22:20.792 "data_size": 65536 00:22:20.792 }, 00:22:20.792 { 00:22:20.792 "name": "BaseBdev2", 00:22:20.792 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:20.792 "is_configured": true, 00:22:20.792 "data_offset": 0, 00:22:20.792 "data_size": 65536 00:22:20.792 }, 00:22:20.792 { 00:22:20.792 "name": "BaseBdev3", 00:22:20.792 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:20.792 "is_configured": true, 00:22:20.792 "data_offset": 0, 00:22:20.792 "data_size": 65536 00:22:20.792 } 00:22:20.792 ] 00:22:20.792 }' 00:22:20.792 07:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.792 07:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.050 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.310 [2024-11-20 07:19:45.341330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:21.310 "name": "Existed_Raid", 00:22:21.310 "aliases": [ 00:22:21.310 "96dc2756-2efc-44fd-94ed-bf3631a29d63" 00:22:21.310 ], 00:22:21.310 "product_name": "Raid Volume", 00:22:21.310 "block_size": 512, 00:22:21.310 "num_blocks": 196608, 00:22:21.310 "uuid": "96dc2756-2efc-44fd-94ed-bf3631a29d63", 00:22:21.310 "assigned_rate_limits": { 00:22:21.310 "rw_ios_per_sec": 0, 00:22:21.310 "rw_mbytes_per_sec": 0, 00:22:21.310 "r_mbytes_per_sec": 0, 00:22:21.310 "w_mbytes_per_sec": 0 00:22:21.310 }, 00:22:21.310 "claimed": false, 00:22:21.310 "zoned": false, 00:22:21.310 "supported_io_types": { 00:22:21.310 "read": true, 00:22:21.310 "write": true, 00:22:21.310 "unmap": true, 00:22:21.310 "flush": true, 00:22:21.310 "reset": true, 00:22:21.310 "nvme_admin": false, 00:22:21.310 "nvme_io": false, 00:22:21.310 "nvme_io_md": false, 00:22:21.310 "write_zeroes": true, 00:22:21.310 "zcopy": false, 00:22:21.310 "get_zone_info": false, 00:22:21.310 "zone_management": false, 00:22:21.310 "zone_append": false, 00:22:21.310 "compare": false, 00:22:21.310 "compare_and_write": false, 00:22:21.310 "abort": false, 00:22:21.310 "seek_hole": false, 00:22:21.310 "seek_data": false, 00:22:21.310 "copy": false, 00:22:21.310 "nvme_iov_md": false 00:22:21.310 }, 00:22:21.310 "memory_domains": [ 00:22:21.310 { 00:22:21.310 "dma_device_id": "system", 00:22:21.310 "dma_device_type": 1 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.310 "dma_device_type": 2 00:22:21.310 }, 
00:22:21.310 { 00:22:21.310 "dma_device_id": "system", 00:22:21.310 "dma_device_type": 1 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.310 "dma_device_type": 2 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "dma_device_id": "system", 00:22:21.310 "dma_device_type": 1 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.310 "dma_device_type": 2 00:22:21.310 } 00:22:21.310 ], 00:22:21.310 "driver_specific": { 00:22:21.310 "raid": { 00:22:21.310 "uuid": "96dc2756-2efc-44fd-94ed-bf3631a29d63", 00:22:21.310 "strip_size_kb": 64, 00:22:21.310 "state": "online", 00:22:21.310 "raid_level": "raid0", 00:22:21.310 "superblock": false, 00:22:21.310 "num_base_bdevs": 3, 00:22:21.310 "num_base_bdevs_discovered": 3, 00:22:21.310 "num_base_bdevs_operational": 3, 00:22:21.310 "base_bdevs_list": [ 00:22:21.310 { 00:22:21.310 "name": "NewBaseBdev", 00:22:21.310 "uuid": "b2e91b1a-d0c7-4cf8-9e0d-38629a418772", 00:22:21.310 "is_configured": true, 00:22:21.310 "data_offset": 0, 00:22:21.310 "data_size": 65536 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "name": "BaseBdev2", 00:22:21.310 "uuid": "7397bee2-fc29-4aa1-aa06-ba2c6308bb4a", 00:22:21.310 "is_configured": true, 00:22:21.310 "data_offset": 0, 00:22:21.310 "data_size": 65536 00:22:21.310 }, 00:22:21.310 { 00:22:21.310 "name": "BaseBdev3", 00:22:21.310 "uuid": "451605f6-8570-412b-9f78-5319bc7852ad", 00:22:21.310 "is_configured": true, 00:22:21.310 "data_offset": 0, 00:22:21.310 "data_size": 65536 00:22:21.310 } 00:22:21.310 ] 00:22:21.310 } 00:22:21.310 } 00:22:21.310 }' 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:21.310 BaseBdev2 00:22:21.310 BaseBdev3' 00:22:21.310 07:19:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.310 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.311 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.570 [2024-11-20 07:19:45.640943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:21.570 [2024-11-20 07:19:45.641020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:21.570 [2024-11-20 07:19:45.641106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.570 [2024-11-20 07:19:45.641172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.570 [2024-11-20 07:19:45.641193] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64016 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64016 ']' 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64016 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64016 00:22:21.570 killing process with pid 64016 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64016' 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64016 00:22:21.570 [2024-11-20 07:19:45.683415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.570 07:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64016 00:22:21.829 [2024-11-20 07:19:45.946270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:22.767 ************************************ 00:22:22.767 END TEST raid_state_function_test 00:22:22.767 ************************************ 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:22.767 00:22:22.767 real 0m11.935s 
00:22:22.767 user 0m19.868s 00:22:22.767 sys 0m1.630s 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.767 07:19:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:22:22.767 07:19:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:22.767 07:19:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.767 07:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.767 ************************************ 00:22:22.767 START TEST raid_state_function_test_sb 00:22:22.767 ************************************ 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:22.767 07:19:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:22.768 Process raid pid: 64654 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64654 
00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64654' 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64654 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64654 ']' 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.768 07:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.027 [2024-11-20 07:19:47.102939] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:22:23.027 [2024-11-20 07:19:47.103390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.027 [2024-11-20 07:19:47.289567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.302 [2024-11-20 07:19:47.416559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.594 [2024-11-20 07:19:47.623176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.594 [2024-11-20 07:19:47.623454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.853 [2024-11-20 07:19:48.094282] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:23.853 [2024-11-20 07:19:48.094366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:23.853 [2024-11-20 07:19:48.094384] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:23.853 [2024-11-20 07:19:48.094400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:23.853 [2024-11-20 07:19:48.094409] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:22:23.853 [2024-11-20 07:19:48.094423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.853 07:19:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.111 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.111 "name": "Existed_Raid", 00:22:24.111 "uuid": "62d85af6-5e3f-48be-b49e-89a4015f306c", 00:22:24.111 "strip_size_kb": 64, 00:22:24.111 "state": "configuring", 00:22:24.111 "raid_level": "raid0", 00:22:24.111 "superblock": true, 00:22:24.111 "num_base_bdevs": 3, 00:22:24.111 "num_base_bdevs_discovered": 0, 00:22:24.111 "num_base_bdevs_operational": 3, 00:22:24.111 "base_bdevs_list": [ 00:22:24.111 { 00:22:24.111 "name": "BaseBdev1", 00:22:24.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.111 "is_configured": false, 00:22:24.111 "data_offset": 0, 00:22:24.111 "data_size": 0 00:22:24.111 }, 00:22:24.111 { 00:22:24.111 "name": "BaseBdev2", 00:22:24.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.111 "is_configured": false, 00:22:24.111 "data_offset": 0, 00:22:24.111 "data_size": 0 00:22:24.111 }, 00:22:24.111 { 00:22:24.111 "name": "BaseBdev3", 00:22:24.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.111 "is_configured": false, 00:22:24.111 "data_offset": 0, 00:22:24.111 "data_size": 0 00:22:24.111 } 00:22:24.111 ] 00:22:24.111 }' 00:22:24.111 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.111 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.369 [2024-11-20 07:19:48.622338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:24.369 [2024-11-20 07:19:48.622553] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.369 [2024-11-20 07:19:48.634402] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:24.369 [2024-11-20 07:19:48.634625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:24.369 [2024-11-20 07:19:48.634792] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:24.369 [2024-11-20 07:19:48.634857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:24.369 [2024-11-20 07:19:48.634964] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:24.369 [2024-11-20 07:19:48.635121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.369 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.628 [2024-11-20 07:19:48.684515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.628 BaseBdev1 
00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.628 [ 00:22:24.628 { 00:22:24.628 "name": "BaseBdev1", 00:22:24.628 "aliases": [ 00:22:24.628 "3243452b-96bd-4d49-b30d-c296bf317f9a" 00:22:24.628 ], 00:22:24.628 "product_name": "Malloc disk", 00:22:24.628 "block_size": 512, 00:22:24.628 "num_blocks": 65536, 00:22:24.628 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:24.628 "assigned_rate_limits": { 00:22:24.628 
"rw_ios_per_sec": 0, 00:22:24.628 "rw_mbytes_per_sec": 0, 00:22:24.628 "r_mbytes_per_sec": 0, 00:22:24.628 "w_mbytes_per_sec": 0 00:22:24.628 }, 00:22:24.628 "claimed": true, 00:22:24.628 "claim_type": "exclusive_write", 00:22:24.628 "zoned": false, 00:22:24.628 "supported_io_types": { 00:22:24.628 "read": true, 00:22:24.628 "write": true, 00:22:24.628 "unmap": true, 00:22:24.628 "flush": true, 00:22:24.628 "reset": true, 00:22:24.628 "nvme_admin": false, 00:22:24.628 "nvme_io": false, 00:22:24.628 "nvme_io_md": false, 00:22:24.628 "write_zeroes": true, 00:22:24.628 "zcopy": true, 00:22:24.628 "get_zone_info": false, 00:22:24.628 "zone_management": false, 00:22:24.628 "zone_append": false, 00:22:24.628 "compare": false, 00:22:24.628 "compare_and_write": false, 00:22:24.628 "abort": true, 00:22:24.628 "seek_hole": false, 00:22:24.628 "seek_data": false, 00:22:24.628 "copy": true, 00:22:24.628 "nvme_iov_md": false 00:22:24.628 }, 00:22:24.628 "memory_domains": [ 00:22:24.628 { 00:22:24.628 "dma_device_id": "system", 00:22:24.628 "dma_device_type": 1 00:22:24.628 }, 00:22:24.628 { 00:22:24.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.628 "dma_device_type": 2 00:22:24.628 } 00:22:24.628 ], 00:22:24.628 "driver_specific": {} 00:22:24.628 } 00:22:24.628 ] 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.628 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.629 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.629 "name": "Existed_Raid", 00:22:24.629 "uuid": "7bc3db66-efed-4231-9c63-57130f847a73", 00:22:24.629 "strip_size_kb": 64, 00:22:24.629 "state": "configuring", 00:22:24.629 "raid_level": "raid0", 00:22:24.629 "superblock": true, 00:22:24.629 "num_base_bdevs": 3, 00:22:24.629 "num_base_bdevs_discovered": 1, 00:22:24.629 "num_base_bdevs_operational": 3, 00:22:24.629 "base_bdevs_list": [ 00:22:24.629 { 00:22:24.629 "name": "BaseBdev1", 00:22:24.629 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:24.629 "is_configured": true, 00:22:24.629 "data_offset": 2048, 00:22:24.629 "data_size": 63488 
00:22:24.629 }, 00:22:24.629 { 00:22:24.629 "name": "BaseBdev2", 00:22:24.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.629 "is_configured": false, 00:22:24.629 "data_offset": 0, 00:22:24.629 "data_size": 0 00:22:24.629 }, 00:22:24.629 { 00:22:24.629 "name": "BaseBdev3", 00:22:24.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.629 "is_configured": false, 00:22:24.629 "data_offset": 0, 00:22:24.629 "data_size": 0 00:22:24.629 } 00:22:24.629 ] 00:22:24.629 }' 00:22:24.629 07:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.629 07:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.194 [2024-11-20 07:19:49.216710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:25.194 [2024-11-20 07:19:49.216777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.194 [2024-11-20 07:19:49.224743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.194 [2024-11-20 
07:19:49.227386] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:25.194 [2024-11-20 07:19:49.227436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:25.194 [2024-11-20 07:19:49.227470] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:25.194 [2024-11-20 07:19:49.227485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.194 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.195 "name": "Existed_Raid", 00:22:25.195 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:25.195 "strip_size_kb": 64, 00:22:25.195 "state": "configuring", 00:22:25.195 "raid_level": "raid0", 00:22:25.195 "superblock": true, 00:22:25.195 "num_base_bdevs": 3, 00:22:25.195 "num_base_bdevs_discovered": 1, 00:22:25.195 "num_base_bdevs_operational": 3, 00:22:25.195 "base_bdevs_list": [ 00:22:25.195 { 00:22:25.195 "name": "BaseBdev1", 00:22:25.195 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:25.195 "is_configured": true, 00:22:25.195 "data_offset": 2048, 00:22:25.195 "data_size": 63488 00:22:25.195 }, 00:22:25.195 { 00:22:25.195 "name": "BaseBdev2", 00:22:25.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.195 "is_configured": false, 00:22:25.195 "data_offset": 0, 00:22:25.195 "data_size": 0 00:22:25.195 }, 00:22:25.195 { 00:22:25.195 "name": "BaseBdev3", 00:22:25.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.195 "is_configured": false, 00:22:25.195 "data_offset": 0, 00:22:25.195 "data_size": 0 00:22:25.195 } 00:22:25.195 ] 00:22:25.195 }' 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.195 07:19:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.761 [2024-11-20 07:19:49.788434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.761 BaseBdev2 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.761 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 [ 00:22:25.762 { 00:22:25.762 "name": "BaseBdev2", 00:22:25.762 "aliases": [ 00:22:25.762 "82ffb1a3-e29a-496f-bc4c-17d134141b34" 00:22:25.762 ], 00:22:25.762 "product_name": "Malloc disk", 00:22:25.762 "block_size": 512, 00:22:25.762 "num_blocks": 65536, 00:22:25.762 "uuid": "82ffb1a3-e29a-496f-bc4c-17d134141b34", 00:22:25.762 "assigned_rate_limits": { 00:22:25.762 "rw_ios_per_sec": 0, 00:22:25.762 "rw_mbytes_per_sec": 0, 00:22:25.762 "r_mbytes_per_sec": 0, 00:22:25.762 "w_mbytes_per_sec": 0 00:22:25.762 }, 00:22:25.762 "claimed": true, 00:22:25.762 "claim_type": "exclusive_write", 00:22:25.762 "zoned": false, 00:22:25.762 "supported_io_types": { 00:22:25.762 "read": true, 00:22:25.762 "write": true, 00:22:25.762 "unmap": true, 00:22:25.762 "flush": true, 00:22:25.762 "reset": true, 00:22:25.762 "nvme_admin": false, 00:22:25.762 "nvme_io": false, 00:22:25.762 "nvme_io_md": false, 00:22:25.762 "write_zeroes": true, 00:22:25.762 "zcopy": true, 00:22:25.762 "get_zone_info": false, 00:22:25.762 "zone_management": false, 00:22:25.762 "zone_append": false, 00:22:25.762 "compare": false, 00:22:25.762 "compare_and_write": false, 00:22:25.762 "abort": true, 00:22:25.762 "seek_hole": false, 00:22:25.762 "seek_data": false, 00:22:25.762 "copy": true, 00:22:25.762 "nvme_iov_md": false 00:22:25.762 }, 00:22:25.762 "memory_domains": [ 00:22:25.762 { 00:22:25.762 "dma_device_id": "system", 00:22:25.762 "dma_device_type": 1 00:22:25.762 }, 00:22:25.762 { 00:22:25.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.762 "dma_device_type": 2 00:22:25.762 } 00:22:25.762 ], 00:22:25.762 "driver_specific": {} 00:22:25.762 } 00:22:25.762 ] 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.762 "name": "Existed_Raid", 00:22:25.762 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:25.762 "strip_size_kb": 64, 00:22:25.762 "state": "configuring", 00:22:25.762 "raid_level": "raid0", 00:22:25.762 "superblock": true, 00:22:25.762 "num_base_bdevs": 3, 00:22:25.762 "num_base_bdevs_discovered": 2, 00:22:25.762 "num_base_bdevs_operational": 3, 00:22:25.762 "base_bdevs_list": [ 00:22:25.762 { 00:22:25.762 "name": "BaseBdev1", 00:22:25.762 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:25.762 "is_configured": true, 00:22:25.762 "data_offset": 2048, 00:22:25.762 "data_size": 63488 00:22:25.762 }, 00:22:25.762 { 00:22:25.762 "name": "BaseBdev2", 00:22:25.762 "uuid": "82ffb1a3-e29a-496f-bc4c-17d134141b34", 00:22:25.762 "is_configured": true, 00:22:25.762 "data_offset": 2048, 00:22:25.762 "data_size": 63488 00:22:25.762 }, 00:22:25.762 { 00:22:25.762 "name": "BaseBdev3", 00:22:25.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.762 "is_configured": false, 00:22:25.762 "data_offset": 0, 00:22:25.762 "data_size": 0 00:22:25.762 } 00:22:25.762 ] 00:22:25.762 }' 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.762 07:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.330 [2024-11-20 07:19:50.373294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:26.330 [2024-11-20 07:19:50.373839] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:26.330 [2024-11-20 07:19:50.374000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:26.330 BaseBdev3 00:22:26.330 [2024-11-20 07:19:50.374387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:26.330 [2024-11-20 07:19:50.374838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:26.330 [2024-11-20 07:19:50.375000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.330 [2024-11-20 07:19:50.375310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.330 [ 00:22:26.330 { 00:22:26.330 "name": "BaseBdev3", 00:22:26.330 "aliases": [ 00:22:26.330 "9a790946-fc74-4500-9114-0f8ae8081bb1" 00:22:26.330 ], 00:22:26.330 "product_name": "Malloc disk", 00:22:26.330 "block_size": 512, 00:22:26.330 "num_blocks": 65536, 00:22:26.330 "uuid": "9a790946-fc74-4500-9114-0f8ae8081bb1", 00:22:26.330 "assigned_rate_limits": { 00:22:26.330 "rw_ios_per_sec": 0, 00:22:26.330 "rw_mbytes_per_sec": 0, 00:22:26.330 "r_mbytes_per_sec": 0, 00:22:26.330 "w_mbytes_per_sec": 0 00:22:26.330 }, 00:22:26.330 "claimed": true, 00:22:26.330 "claim_type": "exclusive_write", 00:22:26.330 "zoned": false, 00:22:26.330 "supported_io_types": { 00:22:26.330 "read": true, 00:22:26.330 "write": true, 00:22:26.330 "unmap": true, 00:22:26.330 "flush": true, 00:22:26.330 "reset": true, 00:22:26.330 "nvme_admin": false, 00:22:26.330 "nvme_io": false, 00:22:26.330 "nvme_io_md": false, 00:22:26.330 "write_zeroes": true, 00:22:26.330 "zcopy": true, 00:22:26.330 "get_zone_info": false, 00:22:26.330 "zone_management": false, 00:22:26.330 "zone_append": false, 00:22:26.330 "compare": false, 00:22:26.330 "compare_and_write": false, 00:22:26.330 "abort": true, 00:22:26.330 "seek_hole": false, 00:22:26.330 "seek_data": false, 00:22:26.330 "copy": true, 00:22:26.330 "nvme_iov_md": false 00:22:26.330 }, 00:22:26.330 "memory_domains": [ 00:22:26.330 { 00:22:26.330 "dma_device_id": "system", 00:22:26.330 "dma_device_type": 1 00:22:26.330 }, 00:22:26.330 { 00:22:26.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.330 "dma_device_type": 2 00:22:26.330 } 00:22:26.330 ], 00:22:26.330 "driver_specific": 
{} 00:22:26.330 } 00:22:26.330 ] 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.330 "name": "Existed_Raid", 00:22:26.330 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:26.330 "strip_size_kb": 64, 00:22:26.330 "state": "online", 00:22:26.330 "raid_level": "raid0", 00:22:26.330 "superblock": true, 00:22:26.330 "num_base_bdevs": 3, 00:22:26.330 "num_base_bdevs_discovered": 3, 00:22:26.330 "num_base_bdevs_operational": 3, 00:22:26.330 "base_bdevs_list": [ 00:22:26.330 { 00:22:26.330 "name": "BaseBdev1", 00:22:26.330 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:26.330 "is_configured": true, 00:22:26.330 "data_offset": 2048, 00:22:26.330 "data_size": 63488 00:22:26.330 }, 00:22:26.330 { 00:22:26.330 "name": "BaseBdev2", 00:22:26.330 "uuid": "82ffb1a3-e29a-496f-bc4c-17d134141b34", 00:22:26.330 "is_configured": true, 00:22:26.330 "data_offset": 2048, 00:22:26.330 "data_size": 63488 00:22:26.330 }, 00:22:26.330 { 00:22:26.330 "name": "BaseBdev3", 00:22:26.330 "uuid": "9a790946-fc74-4500-9114-0f8ae8081bb1", 00:22:26.330 "is_configured": true, 00:22:26.330 "data_offset": 2048, 00:22:26.330 "data_size": 63488 00:22:26.330 } 00:22:26.330 ] 00:22:26.330 }' 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.330 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.898 [2024-11-20 07:19:50.934005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.898 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:26.898 "name": "Existed_Raid", 00:22:26.898 "aliases": [ 00:22:26.898 "990b14e9-1360-463b-ba90-8185f67df72b" 00:22:26.898 ], 00:22:26.898 "product_name": "Raid Volume", 00:22:26.898 "block_size": 512, 00:22:26.898 "num_blocks": 190464, 00:22:26.898 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:26.898 "assigned_rate_limits": { 00:22:26.898 "rw_ios_per_sec": 0, 00:22:26.898 "rw_mbytes_per_sec": 0, 00:22:26.898 "r_mbytes_per_sec": 0, 00:22:26.898 "w_mbytes_per_sec": 0 00:22:26.898 }, 00:22:26.898 "claimed": false, 00:22:26.898 "zoned": false, 00:22:26.898 "supported_io_types": { 00:22:26.898 "read": true, 00:22:26.898 "write": true, 00:22:26.898 "unmap": true, 00:22:26.898 "flush": true, 00:22:26.898 "reset": true, 00:22:26.898 "nvme_admin": false, 00:22:26.898 "nvme_io": false, 00:22:26.898 "nvme_io_md": false, 00:22:26.898 
"write_zeroes": true, 00:22:26.898 "zcopy": false, 00:22:26.898 "get_zone_info": false, 00:22:26.899 "zone_management": false, 00:22:26.899 "zone_append": false, 00:22:26.899 "compare": false, 00:22:26.899 "compare_and_write": false, 00:22:26.899 "abort": false, 00:22:26.899 "seek_hole": false, 00:22:26.899 "seek_data": false, 00:22:26.899 "copy": false, 00:22:26.899 "nvme_iov_md": false 00:22:26.899 }, 00:22:26.899 "memory_domains": [ 00:22:26.899 { 00:22:26.899 "dma_device_id": "system", 00:22:26.899 "dma_device_type": 1 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.899 "dma_device_type": 2 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "dma_device_id": "system", 00:22:26.899 "dma_device_type": 1 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.899 "dma_device_type": 2 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "dma_device_id": "system", 00:22:26.899 "dma_device_type": 1 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.899 "dma_device_type": 2 00:22:26.899 } 00:22:26.899 ], 00:22:26.899 "driver_specific": { 00:22:26.899 "raid": { 00:22:26.899 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:26.899 "strip_size_kb": 64, 00:22:26.899 "state": "online", 00:22:26.899 "raid_level": "raid0", 00:22:26.899 "superblock": true, 00:22:26.899 "num_base_bdevs": 3, 00:22:26.899 "num_base_bdevs_discovered": 3, 00:22:26.899 "num_base_bdevs_operational": 3, 00:22:26.899 "base_bdevs_list": [ 00:22:26.899 { 00:22:26.899 "name": "BaseBdev1", 00:22:26.899 "uuid": "3243452b-96bd-4d49-b30d-c296bf317f9a", 00:22:26.899 "is_configured": true, 00:22:26.899 "data_offset": 2048, 00:22:26.899 "data_size": 63488 00:22:26.899 }, 00:22:26.899 { 00:22:26.899 "name": "BaseBdev2", 00:22:26.899 "uuid": "82ffb1a3-e29a-496f-bc4c-17d134141b34", 00:22:26.899 "is_configured": true, 00:22:26.899 "data_offset": 2048, 00:22:26.899 "data_size": 63488 00:22:26.899 }, 
00:22:26.899 { 00:22:26.899 "name": "BaseBdev3", 00:22:26.899 "uuid": "9a790946-fc74-4500-9114-0f8ae8081bb1", 00:22:26.899 "is_configured": true, 00:22:26.899 "data_offset": 2048, 00:22:26.899 "data_size": 63488 00:22:26.899 } 00:22:26.899 ] 00:22:26.899 } 00:22:26.899 } 00:22:26.899 }' 00:22:26.899 07:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:26.899 BaseBdev2 00:22:26.899 BaseBdev3' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.899 
07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.899 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.158 [2024-11-20 07:19:51.241680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.158 [2024-11-20 07:19:51.241717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:27.158 [2024-11-20 07:19:51.241789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.158 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.158 "name": "Existed_Raid", 00:22:27.158 "uuid": "990b14e9-1360-463b-ba90-8185f67df72b", 00:22:27.158 "strip_size_kb": 64, 00:22:27.159 "state": "offline", 00:22:27.159 "raid_level": "raid0", 00:22:27.159 "superblock": true, 00:22:27.159 "num_base_bdevs": 3, 00:22:27.159 "num_base_bdevs_discovered": 2, 00:22:27.159 "num_base_bdevs_operational": 2, 00:22:27.159 "base_bdevs_list": [ 00:22:27.159 { 00:22:27.159 "name": null, 00:22:27.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.159 "is_configured": false, 00:22:27.159 "data_offset": 0, 00:22:27.159 "data_size": 63488 00:22:27.159 }, 00:22:27.159 { 00:22:27.159 "name": "BaseBdev2", 00:22:27.159 "uuid": "82ffb1a3-e29a-496f-bc4c-17d134141b34", 00:22:27.159 "is_configured": true, 00:22:27.159 "data_offset": 2048, 00:22:27.159 "data_size": 63488 00:22:27.159 }, 00:22:27.159 { 00:22:27.159 "name": "BaseBdev3", 00:22:27.159 "uuid": "9a790946-fc74-4500-9114-0f8ae8081bb1", 
00:22:27.159 "is_configured": true, 00:22:27.159 "data_offset": 2048, 00:22:27.159 "data_size": 63488 00:22:27.159 } 00:22:27.159 ] 00:22:27.159 }' 00:22:27.159 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.159 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.726 [2024-11-20 07:19:51.841827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.726 07:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.726 [2024-11-20 07:19:51.995538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:27.726 [2024-11-20 07:19:51.995622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 BaseBdev2 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:27.986 07:19:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 [ 00:22:27.986 { 00:22:27.986 "name": "BaseBdev2", 00:22:27.986 "aliases": [ 00:22:27.986 "32eb8497-a8b8-48c4-be88-a59ec1a4271b" 00:22:27.986 ], 00:22:27.986 "product_name": "Malloc disk", 00:22:27.986 "block_size": 512, 00:22:27.986 "num_blocks": 65536, 00:22:27.986 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:27.986 "assigned_rate_limits": { 00:22:27.986 "rw_ios_per_sec": 0, 00:22:27.986 "rw_mbytes_per_sec": 0, 00:22:27.986 "r_mbytes_per_sec": 0, 00:22:27.986 "w_mbytes_per_sec": 0 00:22:27.986 }, 00:22:27.986 "claimed": false, 00:22:27.986 "zoned": false, 00:22:27.986 "supported_io_types": { 00:22:27.986 "read": true, 00:22:27.986 "write": true, 00:22:27.986 "unmap": true, 00:22:27.986 "flush": true, 00:22:27.986 "reset": true, 00:22:27.986 "nvme_admin": false, 00:22:27.986 "nvme_io": false, 00:22:27.986 "nvme_io_md": false, 00:22:27.986 "write_zeroes": true, 00:22:27.986 "zcopy": true, 00:22:27.986 "get_zone_info": false, 00:22:27.986 
"zone_management": false, 00:22:27.986 "zone_append": false, 00:22:27.986 "compare": false, 00:22:27.986 "compare_and_write": false, 00:22:27.986 "abort": true, 00:22:27.986 "seek_hole": false, 00:22:27.986 "seek_data": false, 00:22:27.986 "copy": true, 00:22:27.986 "nvme_iov_md": false 00:22:27.986 }, 00:22:27.986 "memory_domains": [ 00:22:27.986 { 00:22:27.986 "dma_device_id": "system", 00:22:27.986 "dma_device_type": 1 00:22:27.986 }, 00:22:27.986 { 00:22:27.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.986 "dma_device_type": 2 00:22:27.986 } 00:22:27.986 ], 00:22:27.986 "driver_specific": {} 00:22:27.986 } 00:22:27.986 ] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 BaseBdev3 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.986 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.279 [ 00:22:28.279 { 00:22:28.279 "name": "BaseBdev3", 00:22:28.279 "aliases": [ 00:22:28.279 "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa" 00:22:28.279 ], 00:22:28.279 "product_name": "Malloc disk", 00:22:28.279 "block_size": 512, 00:22:28.279 "num_blocks": 65536, 00:22:28.279 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:28.279 "assigned_rate_limits": { 00:22:28.279 "rw_ios_per_sec": 0, 00:22:28.279 "rw_mbytes_per_sec": 0, 00:22:28.279 "r_mbytes_per_sec": 0, 00:22:28.279 "w_mbytes_per_sec": 0 00:22:28.279 }, 00:22:28.279 "claimed": false, 00:22:28.279 "zoned": false, 00:22:28.279 "supported_io_types": { 00:22:28.279 "read": true, 00:22:28.279 "write": true, 00:22:28.279 "unmap": true, 00:22:28.279 "flush": true, 00:22:28.279 "reset": true, 00:22:28.279 "nvme_admin": false, 00:22:28.279 "nvme_io": false, 00:22:28.279 "nvme_io_md": false, 00:22:28.279 "write_zeroes": true, 00:22:28.279 
"zcopy": true, 00:22:28.279 "get_zone_info": false, 00:22:28.279 "zone_management": false, 00:22:28.279 "zone_append": false, 00:22:28.279 "compare": false, 00:22:28.279 "compare_and_write": false, 00:22:28.279 "abort": true, 00:22:28.279 "seek_hole": false, 00:22:28.279 "seek_data": false, 00:22:28.279 "copy": true, 00:22:28.279 "nvme_iov_md": false 00:22:28.279 }, 00:22:28.279 "memory_domains": [ 00:22:28.279 { 00:22:28.279 "dma_device_id": "system", 00:22:28.279 "dma_device_type": 1 00:22:28.279 }, 00:22:28.279 { 00:22:28.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.279 "dma_device_type": 2 00:22:28.279 } 00:22:28.279 ], 00:22:28.279 "driver_specific": {} 00:22:28.279 } 00:22:28.279 ] 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:28.279 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.280 [2024-11-20 07:19:52.294388] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.280 [2024-11-20 07:19:52.294441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.280 [2024-11-20 07:19:52.294472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.280 [2024-11-20 07:19:52.296946] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.280 07:19:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.280 "name": "Existed_Raid", 00:22:28.280 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:28.280 "strip_size_kb": 64, 00:22:28.280 "state": "configuring", 00:22:28.280 "raid_level": "raid0", 00:22:28.280 "superblock": true, 00:22:28.280 "num_base_bdevs": 3, 00:22:28.280 "num_base_bdevs_discovered": 2, 00:22:28.280 "num_base_bdevs_operational": 3, 00:22:28.280 "base_bdevs_list": [ 00:22:28.280 { 00:22:28.280 "name": "BaseBdev1", 00:22:28.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.280 "is_configured": false, 00:22:28.280 "data_offset": 0, 00:22:28.280 "data_size": 0 00:22:28.280 }, 00:22:28.280 { 00:22:28.280 "name": "BaseBdev2", 00:22:28.280 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:28.280 "is_configured": true, 00:22:28.280 "data_offset": 2048, 00:22:28.280 "data_size": 63488 00:22:28.280 }, 00:22:28.280 { 00:22:28.280 "name": "BaseBdev3", 00:22:28.280 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:28.280 "is_configured": true, 00:22:28.280 "data_offset": 2048, 00:22:28.280 "data_size": 63488 00:22:28.280 } 00:22:28.280 ] 00:22:28.280 }' 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.280 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.586 [2024-11-20 07:19:52.826562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.586 07:19:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.586 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.844 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.844 "name": "Existed_Raid", 00:22:28.844 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:28.844 "strip_size_kb": 64, 
00:22:28.844 "state": "configuring", 00:22:28.844 "raid_level": "raid0", 00:22:28.844 "superblock": true, 00:22:28.844 "num_base_bdevs": 3, 00:22:28.844 "num_base_bdevs_discovered": 1, 00:22:28.844 "num_base_bdevs_operational": 3, 00:22:28.844 "base_bdevs_list": [ 00:22:28.844 { 00:22:28.844 "name": "BaseBdev1", 00:22:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.844 "is_configured": false, 00:22:28.844 "data_offset": 0, 00:22:28.844 "data_size": 0 00:22:28.844 }, 00:22:28.844 { 00:22:28.844 "name": null, 00:22:28.844 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:28.844 "is_configured": false, 00:22:28.844 "data_offset": 0, 00:22:28.844 "data_size": 63488 00:22:28.844 }, 00:22:28.844 { 00:22:28.844 "name": "BaseBdev3", 00:22:28.844 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:28.844 "is_configured": true, 00:22:28.844 "data_offset": 2048, 00:22:28.844 "data_size": 63488 00:22:28.844 } 00:22:28.844 ] 00:22:28.844 }' 00:22:28.844 07:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.844 07:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.103 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.362 [2024-11-20 07:19:53.406134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.362 BaseBdev1 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.362 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.362 
[ 00:22:29.362 { 00:22:29.363 "name": "BaseBdev1", 00:22:29.363 "aliases": [ 00:22:29.363 "ef05e277-9294-4dd6-9b5e-1d19760c7f94" 00:22:29.363 ], 00:22:29.363 "product_name": "Malloc disk", 00:22:29.363 "block_size": 512, 00:22:29.363 "num_blocks": 65536, 00:22:29.363 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:29.363 "assigned_rate_limits": { 00:22:29.363 "rw_ios_per_sec": 0, 00:22:29.363 "rw_mbytes_per_sec": 0, 00:22:29.363 "r_mbytes_per_sec": 0, 00:22:29.363 "w_mbytes_per_sec": 0 00:22:29.363 }, 00:22:29.363 "claimed": true, 00:22:29.363 "claim_type": "exclusive_write", 00:22:29.363 "zoned": false, 00:22:29.363 "supported_io_types": { 00:22:29.363 "read": true, 00:22:29.363 "write": true, 00:22:29.363 "unmap": true, 00:22:29.363 "flush": true, 00:22:29.363 "reset": true, 00:22:29.363 "nvme_admin": false, 00:22:29.363 "nvme_io": false, 00:22:29.363 "nvme_io_md": false, 00:22:29.363 "write_zeroes": true, 00:22:29.363 "zcopy": true, 00:22:29.363 "get_zone_info": false, 00:22:29.363 "zone_management": false, 00:22:29.363 "zone_append": false, 00:22:29.363 "compare": false, 00:22:29.363 "compare_and_write": false, 00:22:29.363 "abort": true, 00:22:29.363 "seek_hole": false, 00:22:29.363 "seek_data": false, 00:22:29.363 "copy": true, 00:22:29.363 "nvme_iov_md": false 00:22:29.363 }, 00:22:29.363 "memory_domains": [ 00:22:29.363 { 00:22:29.363 "dma_device_id": "system", 00:22:29.363 "dma_device_type": 1 00:22:29.363 }, 00:22:29.363 { 00:22:29.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.363 "dma_device_type": 2 00:22:29.363 } 00:22:29.363 ], 00:22:29.363 "driver_specific": {} 00:22:29.363 } 00:22:29.363 ] 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.363 "name": "Existed_Raid", 00:22:29.363 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:29.363 "strip_size_kb": 64, 00:22:29.363 "state": "configuring", 00:22:29.363 "raid_level": "raid0", 00:22:29.363 "superblock": true, 
00:22:29.363 "num_base_bdevs": 3, 00:22:29.363 "num_base_bdevs_discovered": 2, 00:22:29.363 "num_base_bdevs_operational": 3, 00:22:29.363 "base_bdevs_list": [ 00:22:29.363 { 00:22:29.363 "name": "BaseBdev1", 00:22:29.363 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:29.363 "is_configured": true, 00:22:29.363 "data_offset": 2048, 00:22:29.363 "data_size": 63488 00:22:29.363 }, 00:22:29.363 { 00:22:29.363 "name": null, 00:22:29.363 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:29.363 "is_configured": false, 00:22:29.363 "data_offset": 0, 00:22:29.363 "data_size": 63488 00:22:29.363 }, 00:22:29.363 { 00:22:29.363 "name": "BaseBdev3", 00:22:29.363 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:29.363 "is_configured": true, 00:22:29.363 "data_offset": 2048, 00:22:29.363 "data_size": 63488 00:22:29.363 } 00:22:29.363 ] 00:22:29.363 }' 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.363 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.931 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:29.931 07:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.931 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.931 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.931 07:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.931 [2024-11-20 07:19:54.006359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.931 "name": "Existed_Raid", 00:22:29.931 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:29.931 "strip_size_kb": 64, 00:22:29.931 "state": "configuring", 00:22:29.931 "raid_level": "raid0", 00:22:29.931 "superblock": true, 00:22:29.931 "num_base_bdevs": 3, 00:22:29.931 "num_base_bdevs_discovered": 1, 00:22:29.931 "num_base_bdevs_operational": 3, 00:22:29.931 "base_bdevs_list": [ 00:22:29.931 { 00:22:29.931 "name": "BaseBdev1", 00:22:29.931 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:29.931 "is_configured": true, 00:22:29.931 "data_offset": 2048, 00:22:29.931 "data_size": 63488 00:22:29.931 }, 00:22:29.931 { 00:22:29.931 "name": null, 00:22:29.931 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:29.931 "is_configured": false, 00:22:29.931 "data_offset": 0, 00:22:29.931 "data_size": 63488 00:22:29.931 }, 00:22:29.931 { 00:22:29.931 "name": null, 00:22:29.931 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:29.931 "is_configured": false, 00:22:29.931 "data_offset": 0, 00:22:29.931 "data_size": 63488 00:22:29.931 } 00:22:29.931 ] 00:22:29.931 }' 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.931 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.497 [2024-11-20 07:19:54.578554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.497 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.498 "name": "Existed_Raid", 00:22:30.498 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:30.498 "strip_size_kb": 64, 00:22:30.498 "state": "configuring", 00:22:30.498 "raid_level": "raid0", 00:22:30.498 "superblock": true, 00:22:30.498 "num_base_bdevs": 3, 00:22:30.498 "num_base_bdevs_discovered": 2, 00:22:30.498 "num_base_bdevs_operational": 3, 00:22:30.498 "base_bdevs_list": [ 00:22:30.498 { 00:22:30.498 "name": "BaseBdev1", 00:22:30.498 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:30.498 "is_configured": true, 00:22:30.498 "data_offset": 2048, 00:22:30.498 "data_size": 63488 00:22:30.498 }, 00:22:30.498 { 00:22:30.498 "name": null, 00:22:30.498 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:30.498 "is_configured": false, 00:22:30.498 "data_offset": 0, 00:22:30.498 "data_size": 63488 00:22:30.498 }, 00:22:30.498 { 00:22:30.498 "name": "BaseBdev3", 00:22:30.498 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:30.498 "is_configured": true, 00:22:30.498 "data_offset": 2048, 00:22:30.498 "data_size": 63488 00:22:30.498 } 00:22:30.498 ] 00:22:30.498 }' 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.498 07:19:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.063 [2024-11-20 07:19:55.170863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.063 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.063 "name": "Existed_Raid", 00:22:31.063 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:31.063 "strip_size_kb": 64, 00:22:31.063 "state": "configuring", 00:22:31.063 "raid_level": "raid0", 00:22:31.063 "superblock": true, 00:22:31.063 "num_base_bdevs": 3, 00:22:31.063 "num_base_bdevs_discovered": 1, 00:22:31.063 "num_base_bdevs_operational": 3, 00:22:31.063 "base_bdevs_list": [ 00:22:31.063 { 00:22:31.063 "name": null, 00:22:31.063 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:31.063 "is_configured": false, 00:22:31.063 "data_offset": 0, 00:22:31.063 "data_size": 63488 00:22:31.063 }, 00:22:31.063 { 00:22:31.063 "name": null, 00:22:31.063 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:31.063 "is_configured": false, 00:22:31.063 "data_offset": 0, 00:22:31.063 
"data_size": 63488 00:22:31.063 }, 00:22:31.063 { 00:22:31.063 "name": "BaseBdev3", 00:22:31.063 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:31.063 "is_configured": true, 00:22:31.063 "data_offset": 2048, 00:22:31.064 "data_size": 63488 00:22:31.064 } 00:22:31.064 ] 00:22:31.064 }' 00:22:31.064 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.064 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.631 [2024-11-20 07:19:55.853030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:31.631 07:19:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.631 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.631 "name": "Existed_Raid", 00:22:31.631 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:31.631 "strip_size_kb": 64, 00:22:31.631 "state": "configuring", 00:22:31.631 "raid_level": "raid0", 00:22:31.631 "superblock": true, 00:22:31.631 "num_base_bdevs": 3, 00:22:31.631 
"num_base_bdevs_discovered": 2, 00:22:31.631 "num_base_bdevs_operational": 3, 00:22:31.631 "base_bdevs_list": [ 00:22:31.631 { 00:22:31.631 "name": null, 00:22:31.631 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:31.631 "is_configured": false, 00:22:31.631 "data_offset": 0, 00:22:31.631 "data_size": 63488 00:22:31.631 }, 00:22:31.631 { 00:22:31.631 "name": "BaseBdev2", 00:22:31.631 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:31.632 "is_configured": true, 00:22:31.632 "data_offset": 2048, 00:22:31.632 "data_size": 63488 00:22:31.632 }, 00:22:31.632 { 00:22:31.632 "name": "BaseBdev3", 00:22:31.632 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:31.632 "is_configured": true, 00:22:31.632 "data_offset": 2048, 00:22:31.632 "data_size": 63488 00:22:31.632 } 00:22:31.632 ] 00:22:31.632 }' 00:22:31.632 07:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.632 07:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:32.199 07:19:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef05e277-9294-4dd6-9b5e-1d19760c7f94 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.199 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.458 [2024-11-20 07:19:56.519651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:32.458 [2024-11-20 07:19:56.520005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:32.458 [2024-11-20 07:19:56.520031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:32.458 [2024-11-20 07:19:56.520338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:32.458 NewBaseBdev 00:22:32.458 [2024-11-20 07:19:56.520541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:32.458 [2024-11-20 07:19:56.520560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:32.458 [2024-11-20 07:19:56.520749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:32.458 
07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.458 [ 00:22:32.458 { 00:22:32.458 "name": "NewBaseBdev", 00:22:32.458 "aliases": [ 00:22:32.458 "ef05e277-9294-4dd6-9b5e-1d19760c7f94" 00:22:32.458 ], 00:22:32.458 "product_name": "Malloc disk", 00:22:32.458 "block_size": 512, 00:22:32.458 "num_blocks": 65536, 00:22:32.458 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:32.458 "assigned_rate_limits": { 00:22:32.458 "rw_ios_per_sec": 0, 00:22:32.458 "rw_mbytes_per_sec": 0, 00:22:32.458 "r_mbytes_per_sec": 0, 00:22:32.458 "w_mbytes_per_sec": 0 00:22:32.458 }, 00:22:32.458 "claimed": true, 00:22:32.458 "claim_type": "exclusive_write", 00:22:32.458 "zoned": false, 00:22:32.458 "supported_io_types": { 00:22:32.458 "read": true, 00:22:32.458 "write": true, 00:22:32.458 
"unmap": true, 00:22:32.458 "flush": true, 00:22:32.458 "reset": true, 00:22:32.458 "nvme_admin": false, 00:22:32.458 "nvme_io": false, 00:22:32.458 "nvme_io_md": false, 00:22:32.458 "write_zeroes": true, 00:22:32.458 "zcopy": true, 00:22:32.458 "get_zone_info": false, 00:22:32.458 "zone_management": false, 00:22:32.458 "zone_append": false, 00:22:32.458 "compare": false, 00:22:32.458 "compare_and_write": false, 00:22:32.458 "abort": true, 00:22:32.458 "seek_hole": false, 00:22:32.458 "seek_data": false, 00:22:32.458 "copy": true, 00:22:32.458 "nvme_iov_md": false 00:22:32.458 }, 00:22:32.458 "memory_domains": [ 00:22:32.458 { 00:22:32.458 "dma_device_id": "system", 00:22:32.458 "dma_device_type": 1 00:22:32.458 }, 00:22:32.458 { 00:22:32.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.458 "dma_device_type": 2 00:22:32.458 } 00:22:32.458 ], 00:22:32.458 "driver_specific": {} 00:22:32.458 } 00:22:32.458 ] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.458 "name": "Existed_Raid", 00:22:32.458 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:32.458 "strip_size_kb": 64, 00:22:32.458 "state": "online", 00:22:32.458 "raid_level": "raid0", 00:22:32.458 "superblock": true, 00:22:32.458 "num_base_bdevs": 3, 00:22:32.458 "num_base_bdevs_discovered": 3, 00:22:32.458 "num_base_bdevs_operational": 3, 00:22:32.458 "base_bdevs_list": [ 00:22:32.458 { 00:22:32.458 "name": "NewBaseBdev", 00:22:32.458 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:32.458 "is_configured": true, 00:22:32.458 "data_offset": 2048, 00:22:32.458 "data_size": 63488 00:22:32.458 }, 00:22:32.458 { 00:22:32.458 "name": "BaseBdev2", 00:22:32.458 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:32.458 "is_configured": true, 00:22:32.458 "data_offset": 2048, 00:22:32.458 "data_size": 63488 00:22:32.458 }, 00:22:32.458 { 00:22:32.458 "name": "BaseBdev3", 00:22:32.458 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:32.458 
"is_configured": true, 00:22:32.458 "data_offset": 2048, 00:22:32.458 "data_size": 63488 00:22:32.458 } 00:22:32.458 ] 00:22:32.458 }' 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.458 07:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.026 [2024-11-20 07:19:57.076305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:33.026 "name": "Existed_Raid", 00:22:33.026 "aliases": [ 00:22:33.026 "42e09269-9412-4693-b7e1-476530b48db5" 00:22:33.026 ], 00:22:33.026 "product_name": "Raid 
Volume", 00:22:33.026 "block_size": 512, 00:22:33.026 "num_blocks": 190464, 00:22:33.026 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:33.026 "assigned_rate_limits": { 00:22:33.026 "rw_ios_per_sec": 0, 00:22:33.026 "rw_mbytes_per_sec": 0, 00:22:33.026 "r_mbytes_per_sec": 0, 00:22:33.026 "w_mbytes_per_sec": 0 00:22:33.026 }, 00:22:33.026 "claimed": false, 00:22:33.026 "zoned": false, 00:22:33.026 "supported_io_types": { 00:22:33.026 "read": true, 00:22:33.026 "write": true, 00:22:33.026 "unmap": true, 00:22:33.026 "flush": true, 00:22:33.026 "reset": true, 00:22:33.026 "nvme_admin": false, 00:22:33.026 "nvme_io": false, 00:22:33.026 "nvme_io_md": false, 00:22:33.026 "write_zeroes": true, 00:22:33.026 "zcopy": false, 00:22:33.026 "get_zone_info": false, 00:22:33.026 "zone_management": false, 00:22:33.026 "zone_append": false, 00:22:33.026 "compare": false, 00:22:33.026 "compare_and_write": false, 00:22:33.026 "abort": false, 00:22:33.026 "seek_hole": false, 00:22:33.026 "seek_data": false, 00:22:33.026 "copy": false, 00:22:33.026 "nvme_iov_md": false 00:22:33.026 }, 00:22:33.026 "memory_domains": [ 00:22:33.026 { 00:22:33.026 "dma_device_id": "system", 00:22:33.026 "dma_device_type": 1 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.026 "dma_device_type": 2 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "dma_device_id": "system", 00:22:33.026 "dma_device_type": 1 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.026 "dma_device_type": 2 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "dma_device_id": "system", 00:22:33.026 "dma_device_type": 1 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.026 "dma_device_type": 2 00:22:33.026 } 00:22:33.026 ], 00:22:33.026 "driver_specific": { 00:22:33.026 "raid": { 00:22:33.026 "uuid": "42e09269-9412-4693-b7e1-476530b48db5", 00:22:33.026 "strip_size_kb": 64, 00:22:33.026 "state": "online", 
00:22:33.026 "raid_level": "raid0", 00:22:33.026 "superblock": true, 00:22:33.026 "num_base_bdevs": 3, 00:22:33.026 "num_base_bdevs_discovered": 3, 00:22:33.026 "num_base_bdevs_operational": 3, 00:22:33.026 "base_bdevs_list": [ 00:22:33.026 { 00:22:33.026 "name": "NewBaseBdev", 00:22:33.026 "uuid": "ef05e277-9294-4dd6-9b5e-1d19760c7f94", 00:22:33.026 "is_configured": true, 00:22:33.026 "data_offset": 2048, 00:22:33.026 "data_size": 63488 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "name": "BaseBdev2", 00:22:33.026 "uuid": "32eb8497-a8b8-48c4-be88-a59ec1a4271b", 00:22:33.026 "is_configured": true, 00:22:33.026 "data_offset": 2048, 00:22:33.026 "data_size": 63488 00:22:33.026 }, 00:22:33.026 { 00:22:33.026 "name": "BaseBdev3", 00:22:33.026 "uuid": "bb0fbab6-d5cc-4c3c-9000-7fdf04226efa", 00:22:33.026 "is_configured": true, 00:22:33.026 "data_offset": 2048, 00:22:33.026 "data_size": 63488 00:22:33.026 } 00:22:33.026 ] 00:22:33.026 } 00:22:33.026 } 00:22:33.026 }' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:33.026 BaseBdev2 00:22:33.026 BaseBdev3' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:33.026 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:33.374 07:19:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.374 [2024-11-20 07:19:57.412055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:33.374 [2024-11-20 07:19:57.412094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.374 [2024-11-20 07:19:57.412213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.374 [2024-11-20 07:19:57.412304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.374 [2024-11-20 07:19:57.412325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64654 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64654 ']' 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64654 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64654 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.374 killing process with pid 64654 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64654' 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64654 00:22:33.374 [2024-11-20 07:19:57.455110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.374 07:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64654 00:22:33.633 [2024-11-20 07:19:57.725534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:34.570 07:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:34.570 00:22:34.570 real 0m11.768s 00:22:34.570 user 0m19.519s 00:22:34.570 sys 0m1.618s 00:22:34.570 07:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.570 07:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.570 ************************************ 00:22:34.570 END TEST raid_state_function_test_sb 00:22:34.570 ************************************ 00:22:34.570 07:19:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:22:34.570 07:19:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:34.570 
07:19:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.570 07:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.570 ************************************ 00:22:34.570 START TEST raid_superblock_test 00:22:34.570 ************************************ 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65285 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65285 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65285 ']' 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.570 07:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.828 [2024-11-20 07:19:58.910394] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:22:34.828 [2024-11-20 07:19:58.910559] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65285 ] 00:22:34.828 [2024-11-20 07:19:59.091074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.085 [2024-11-20 07:19:59.251296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.344 [2024-11-20 07:19:59.467815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.344 [2024-11-20 07:19:59.467906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:35.913 
07:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 malloc1 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 [2024-11-20 07:20:00.009349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:35.913 [2024-11-20 07:20:00.009439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.913 [2024-11-20 07:20:00.009474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:35.913 [2024-11-20 07:20:00.009490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.913 [2024-11-20 07:20:00.012664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.913 [2024-11-20 07:20:00.012740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:35.913 pt1 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 malloc2 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 [2024-11-20 07:20:00.066162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:35.913 [2024-11-20 07:20:00.066225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.913 [2024-11-20 07:20:00.066254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:35.913 [2024-11-20 07:20:00.066268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.913 [2024-11-20 07:20:00.069378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.913 [2024-11-20 07:20:00.069419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:35.913 
pt2 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 malloc3 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.913 [2024-11-20 07:20:00.140566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:35.913 [2024-11-20 07:20:00.140639] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.913 [2024-11-20 07:20:00.140672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:35.913 [2024-11-20 07:20:00.140688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.913 [2024-11-20 07:20:00.143776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.913 [2024-11-20 07:20:00.143817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:35.913 pt3 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:35.913 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.914 [2024-11-20 07:20:00.152724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:35.914 [2024-11-20 07:20:00.155242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:35.914 [2024-11-20 07:20:00.155350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:35.914 [2024-11-20 07:20:00.155589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:35.914 [2024-11-20 07:20:00.155647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:35.914 [2024-11-20 07:20:00.155982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:22:35.914 [2024-11-20 07:20:00.156205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:35.914 [2024-11-20 07:20:00.156232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:35.914 [2024-11-20 07:20:00.156421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.914 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.172 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.172 "name": "raid_bdev1", 00:22:36.173 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:36.173 "strip_size_kb": 64, 00:22:36.173 "state": "online", 00:22:36.173 "raid_level": "raid0", 00:22:36.173 "superblock": true, 00:22:36.173 "num_base_bdevs": 3, 00:22:36.173 "num_base_bdevs_discovered": 3, 00:22:36.173 "num_base_bdevs_operational": 3, 00:22:36.173 "base_bdevs_list": [ 00:22:36.173 { 00:22:36.173 "name": "pt1", 00:22:36.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:36.173 "is_configured": true, 00:22:36.173 "data_offset": 2048, 00:22:36.173 "data_size": 63488 00:22:36.173 }, 00:22:36.173 { 00:22:36.173 "name": "pt2", 00:22:36.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.173 "is_configured": true, 00:22:36.173 "data_offset": 2048, 00:22:36.173 "data_size": 63488 00:22:36.173 }, 00:22:36.173 { 00:22:36.173 "name": "pt3", 00:22:36.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:36.173 "is_configured": true, 00:22:36.173 "data_offset": 2048, 00:22:36.173 "data_size": 63488 00:22:36.173 } 00:22:36.173 ] 00:22:36.173 }' 00:22:36.173 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.173 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.432 [2024-11-20 07:20:00.669314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.432 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:36.432 "name": "raid_bdev1", 00:22:36.432 "aliases": [ 00:22:36.432 "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278" 00:22:36.432 ], 00:22:36.432 "product_name": "Raid Volume", 00:22:36.432 "block_size": 512, 00:22:36.432 "num_blocks": 190464, 00:22:36.432 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:36.432 "assigned_rate_limits": { 00:22:36.432 "rw_ios_per_sec": 0, 00:22:36.432 "rw_mbytes_per_sec": 0, 00:22:36.432 "r_mbytes_per_sec": 0, 00:22:36.432 "w_mbytes_per_sec": 0 00:22:36.432 }, 00:22:36.432 "claimed": false, 00:22:36.432 "zoned": false, 00:22:36.432 "supported_io_types": { 00:22:36.432 "read": true, 00:22:36.432 "write": true, 00:22:36.432 "unmap": true, 00:22:36.432 "flush": true, 00:22:36.432 "reset": true, 00:22:36.432 "nvme_admin": false, 00:22:36.432 "nvme_io": false, 00:22:36.432 "nvme_io_md": false, 00:22:36.432 "write_zeroes": true, 00:22:36.432 "zcopy": false, 00:22:36.432 "get_zone_info": false, 00:22:36.432 "zone_management": false, 00:22:36.432 "zone_append": false, 00:22:36.432 "compare": 
false, 00:22:36.432 "compare_and_write": false, 00:22:36.432 "abort": false, 00:22:36.432 "seek_hole": false, 00:22:36.432 "seek_data": false, 00:22:36.432 "copy": false, 00:22:36.433 "nvme_iov_md": false 00:22:36.433 }, 00:22:36.433 "memory_domains": [ 00:22:36.433 { 00:22:36.433 "dma_device_id": "system", 00:22:36.433 "dma_device_type": 1 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.433 "dma_device_type": 2 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "dma_device_id": "system", 00:22:36.433 "dma_device_type": 1 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.433 "dma_device_type": 2 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "dma_device_id": "system", 00:22:36.433 "dma_device_type": 1 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.433 "dma_device_type": 2 00:22:36.433 } 00:22:36.433 ], 00:22:36.433 "driver_specific": { 00:22:36.433 "raid": { 00:22:36.433 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:36.433 "strip_size_kb": 64, 00:22:36.433 "state": "online", 00:22:36.433 "raid_level": "raid0", 00:22:36.433 "superblock": true, 00:22:36.433 "num_base_bdevs": 3, 00:22:36.433 "num_base_bdevs_discovered": 3, 00:22:36.433 "num_base_bdevs_operational": 3, 00:22:36.433 "base_bdevs_list": [ 00:22:36.433 { 00:22:36.433 "name": "pt1", 00:22:36.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:36.433 "is_configured": true, 00:22:36.433 "data_offset": 2048, 00:22:36.433 "data_size": 63488 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "name": "pt2", 00:22:36.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.433 "is_configured": true, 00:22:36.433 "data_offset": 2048, 00:22:36.433 "data_size": 63488 00:22:36.433 }, 00:22:36.433 { 00:22:36.433 "name": "pt3", 00:22:36.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:36.433 "is_configured": true, 00:22:36.433 "data_offset": 2048, 00:22:36.433 "data_size": 
63488 00:22:36.433 } 00:22:36.433 ] 00:22:36.433 } 00:22:36.433 } 00:22:36.433 }' 00:22:36.433 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:36.692 pt2 00:22:36.692 pt3' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.692 
07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.692 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.693 07:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.952 [2024-11-20 07:20:00.981293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1636c7dd-d5b0-4aab-b51f-bb10a5ce8278 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1636c7dd-d5b0-4aab-b51f-bb10a5ce8278 ']' 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.952 [2024-11-20 07:20:01.028924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.952 [2024-11-20 07:20:01.028962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.952 [2024-11-20 07:20:01.029079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.952 [2024-11-20 07:20:01.029207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.952 [2024-11-20 07:20:01.029239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.952 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 [2024-11-20 07:20:01.181019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:36.953 [2024-11-20 07:20:01.183612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:36.953 [2024-11-20 07:20:01.183704] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:36.953 [2024-11-20 07:20:01.183781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:36.953 [2024-11-20 07:20:01.183849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:36.953 [2024-11-20 07:20:01.183883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:36.953 [2024-11-20 07:20:01.183911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.953 [2024-11-20 07:20:01.183927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:36.953 request: 00:22:36.953 { 00:22:36.953 "name": "raid_bdev1", 00:22:36.953 "raid_level": "raid0", 00:22:36.953 "base_bdevs": [ 00:22:36.953 "malloc1", 00:22:36.953 "malloc2", 00:22:36.953 "malloc3" 00:22:36.953 ], 00:22:36.953 "strip_size_kb": 64, 00:22:36.953 "superblock": false, 00:22:36.953 "method": "bdev_raid_create", 00:22:36.953 "req_id": 1 00:22:36.953 } 00:22:36.953 Got JSON-RPC error response 00:22:36.953 response: 00:22:36.953 { 00:22:36.953 "code": -17, 00:22:36.953 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:36.953 } 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.953 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 [2024-11-20 07:20:01.244998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.212 [2024-11-20 07:20:01.245073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.212 [2024-11-20 07:20:01.245100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:37.212 [2024-11-20 07:20:01.245115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.212 [2024-11-20 07:20:01.248222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.212 [2024-11-20 07:20:01.248262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:37.212 [2024-11-20 07:20:01.248358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:37.212 [2024-11-20 07:20:01.248433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:22:37.212 pt1 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.212 "name": "raid_bdev1", 00:22:37.212 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:37.212 
"strip_size_kb": 64, 00:22:37.212 "state": "configuring", 00:22:37.212 "raid_level": "raid0", 00:22:37.212 "superblock": true, 00:22:37.212 "num_base_bdevs": 3, 00:22:37.212 "num_base_bdevs_discovered": 1, 00:22:37.212 "num_base_bdevs_operational": 3, 00:22:37.212 "base_bdevs_list": [ 00:22:37.212 { 00:22:37.212 "name": "pt1", 00:22:37.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:37.212 "is_configured": true, 00:22:37.212 "data_offset": 2048, 00:22:37.212 "data_size": 63488 00:22:37.212 }, 00:22:37.212 { 00:22:37.212 "name": null, 00:22:37.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.212 "is_configured": false, 00:22:37.212 "data_offset": 2048, 00:22:37.212 "data_size": 63488 00:22:37.212 }, 00:22:37.212 { 00:22:37.212 "name": null, 00:22:37.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:37.212 "is_configured": false, 00:22:37.212 "data_offset": 2048, 00:22:37.212 "data_size": 63488 00:22:37.212 } 00:22:37.212 ] 00:22:37.212 }' 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.212 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.470 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:37.470 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.470 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.470 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.729 [2024-11-20 07:20:01.761200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.729 [2024-11-20 07:20:01.761287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.729 [2024-11-20 07:20:01.761320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:22:37.729 [2024-11-20 07:20:01.761335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.729 [2024-11-20 07:20:01.761913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.729 [2024-11-20 07:20:01.761944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.729 [2024-11-20 07:20:01.762055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:37.729 [2024-11-20 07:20:01.762103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.729 pt2 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.729 [2024-11-20 07:20:01.769147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.729 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:37.729 07:20:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.730 "name": "raid_bdev1", 00:22:37.730 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:37.730 "strip_size_kb": 64, 00:22:37.730 "state": "configuring", 00:22:37.730 "raid_level": "raid0", 00:22:37.730 "superblock": true, 00:22:37.730 "num_base_bdevs": 3, 00:22:37.730 "num_base_bdevs_discovered": 1, 00:22:37.730 "num_base_bdevs_operational": 3, 00:22:37.730 "base_bdevs_list": [ 00:22:37.730 { 00:22:37.730 "name": "pt1", 00:22:37.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:37.730 "is_configured": true, 00:22:37.730 "data_offset": 2048, 00:22:37.730 "data_size": 63488 00:22:37.730 }, 00:22:37.730 { 00:22:37.730 "name": null, 00:22:37.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.730 "is_configured": false, 00:22:37.730 "data_offset": 0, 00:22:37.730 "data_size": 63488 00:22:37.730 }, 00:22:37.730 { 00:22:37.730 "name": null, 00:22:37.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:37.730 
"is_configured": false, 00:22:37.730 "data_offset": 2048, 00:22:37.730 "data_size": 63488 00:22:37.730 } 00:22:37.730 ] 00:22:37.730 }' 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.730 07:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.989 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:37.989 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:37.989 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.989 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 [2024-11-20 07:20:02.281347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:38.250 [2024-11-20 07:20:02.281472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.250 [2024-11-20 07:20:02.281501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:38.250 [2024-11-20 07:20:02.281519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.250 [2024-11-20 07:20:02.282171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.250 [2024-11-20 07:20:02.282210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:38.250 [2024-11-20 07:20:02.282336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:38.250 [2024-11-20 07:20:02.282381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:38.250 pt2 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 [2024-11-20 07:20:02.293360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:38.250 [2024-11-20 07:20:02.293475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.250 [2024-11-20 07:20:02.293499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:38.250 [2024-11-20 07:20:02.293515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.250 [2024-11-20 07:20:02.294123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.250 [2024-11-20 07:20:02.294171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:38.250 [2024-11-20 07:20:02.294286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:38.250 [2024-11-20 07:20:02.294324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:38.250 [2024-11-20 07:20:02.294484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:38.250 [2024-11-20 07:20:02.294511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:38.250 [2024-11-20 07:20:02.294869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:38.250 [2024-11-20 07:20:02.295126] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:38.250 [2024-11-20 07:20:02.295150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:38.250 [2024-11-20 07:20:02.295336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.250 pt3 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.250 "name": "raid_bdev1", 00:22:38.250 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:38.250 "strip_size_kb": 64, 00:22:38.250 "state": "online", 00:22:38.250 "raid_level": "raid0", 00:22:38.250 "superblock": true, 00:22:38.250 "num_base_bdevs": 3, 00:22:38.250 "num_base_bdevs_discovered": 3, 00:22:38.250 "num_base_bdevs_operational": 3, 00:22:38.250 "base_bdevs_list": [ 00:22:38.250 { 00:22:38.250 "name": "pt1", 00:22:38.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.250 "is_configured": true, 00:22:38.250 "data_offset": 2048, 00:22:38.250 "data_size": 63488 00:22:38.250 }, 00:22:38.250 { 00:22:38.250 "name": "pt2", 00:22:38.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.250 "is_configured": true, 00:22:38.250 "data_offset": 2048, 00:22:38.250 "data_size": 63488 00:22:38.250 }, 00:22:38.250 { 00:22:38.250 "name": "pt3", 00:22:38.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.250 "is_configured": true, 00:22:38.250 "data_offset": 2048, 00:22:38.250 "data_size": 63488 00:22:38.250 } 00:22:38.250 ] 00:22:38.250 }' 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.250 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:38.823 07:20:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.823 [2024-11-20 07:20:02.837928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.823 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.823 "name": "raid_bdev1", 00:22:38.823 "aliases": [ 00:22:38.823 "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278" 00:22:38.823 ], 00:22:38.823 "product_name": "Raid Volume", 00:22:38.823 "block_size": 512, 00:22:38.823 "num_blocks": 190464, 00:22:38.823 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:38.823 "assigned_rate_limits": { 00:22:38.823 "rw_ios_per_sec": 0, 00:22:38.823 "rw_mbytes_per_sec": 0, 00:22:38.823 "r_mbytes_per_sec": 0, 00:22:38.823 "w_mbytes_per_sec": 0 00:22:38.823 }, 00:22:38.823 "claimed": false, 00:22:38.823 "zoned": false, 00:22:38.823 "supported_io_types": { 00:22:38.823 "read": true, 00:22:38.823 "write": true, 00:22:38.823 "unmap": true, 00:22:38.823 "flush": true, 00:22:38.823 "reset": true, 00:22:38.823 "nvme_admin": false, 00:22:38.823 "nvme_io": false, 00:22:38.823 "nvme_io_md": false, 00:22:38.823 
"write_zeroes": true, 00:22:38.823 "zcopy": false, 00:22:38.823 "get_zone_info": false, 00:22:38.823 "zone_management": false, 00:22:38.823 "zone_append": false, 00:22:38.823 "compare": false, 00:22:38.823 "compare_and_write": false, 00:22:38.823 "abort": false, 00:22:38.823 "seek_hole": false, 00:22:38.823 "seek_data": false, 00:22:38.823 "copy": false, 00:22:38.823 "nvme_iov_md": false 00:22:38.823 }, 00:22:38.823 "memory_domains": [ 00:22:38.823 { 00:22:38.823 "dma_device_id": "system", 00:22:38.823 "dma_device_type": 1 00:22:38.823 }, 00:22:38.823 { 00:22:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.823 "dma_device_type": 2 00:22:38.823 }, 00:22:38.823 { 00:22:38.823 "dma_device_id": "system", 00:22:38.823 "dma_device_type": 1 00:22:38.823 }, 00:22:38.823 { 00:22:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.823 "dma_device_type": 2 00:22:38.823 }, 00:22:38.823 { 00:22:38.823 "dma_device_id": "system", 00:22:38.823 "dma_device_type": 1 00:22:38.823 }, 00:22:38.823 { 00:22:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.823 "dma_device_type": 2 00:22:38.823 } 00:22:38.823 ], 00:22:38.823 "driver_specific": { 00:22:38.823 "raid": { 00:22:38.823 "uuid": "1636c7dd-d5b0-4aab-b51f-bb10a5ce8278", 00:22:38.823 "strip_size_kb": 64, 00:22:38.823 "state": "online", 00:22:38.823 "raid_level": "raid0", 00:22:38.823 "superblock": true, 00:22:38.823 "num_base_bdevs": 3, 00:22:38.823 "num_base_bdevs_discovered": 3, 00:22:38.823 "num_base_bdevs_operational": 3, 00:22:38.823 "base_bdevs_list": [ 00:22:38.823 { 00:22:38.823 "name": "pt1", 00:22:38.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:38.824 "is_configured": true, 00:22:38.824 "data_offset": 2048, 00:22:38.824 "data_size": 63488 00:22:38.824 }, 00:22:38.824 { 00:22:38.824 "name": "pt2", 00:22:38.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.824 "is_configured": true, 00:22:38.824 "data_offset": 2048, 00:22:38.824 "data_size": 63488 00:22:38.824 }, 00:22:38.824 
{ 00:22:38.824 "name": "pt3", 00:22:38.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:38.824 "is_configured": true, 00:22:38.824 "data_offset": 2048, 00:22:38.824 "data_size": 63488 00:22:38.824 } 00:22:38.824 ] 00:22:38.824 } 00:22:38.824 } 00:22:38.824 }' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:38.824 pt2 00:22:38.824 pt3' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.824 07:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:38.824 07:20:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.824 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:39.083 
[2024-11-20 07:20:03.157944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1636c7dd-d5b0-4aab-b51f-bb10a5ce8278 '!=' 1636c7dd-d5b0-4aab-b51f-bb10a5ce8278 ']' 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65285 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65285 ']' 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65285 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65285 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.083 killing process with pid 65285 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65285' 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65285 00:22:39.083 07:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65285 00:22:39.083 [2024-11-20 07:20:03.239769] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.083 [2024-11-20 07:20:03.239896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.083 [2024-11-20 07:20:03.239990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.083 [2024-11-20 07:20:03.240012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:39.342 [2024-11-20 07:20:03.512603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.720 07:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:40.720 00:22:40.720 real 0m5.771s 00:22:40.720 user 0m8.714s 00:22:40.720 sys 0m0.828s 00:22:40.720 07:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.720 07:20:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.720 ************************************ 00:22:40.720 END TEST raid_superblock_test 00:22:40.720 ************************************ 00:22:40.720 07:20:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:22:40.720 07:20:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:40.720 07:20:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.720 07:20:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.720 ************************************ 00:22:40.720 START TEST raid_read_error_test 00:22:40.720 ************************************ 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:22:40.720 07:20:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:40.720 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iNOFbSaEns 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65544 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65544 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65544 ']' 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.721 07:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.721 [2024-11-20 07:20:04.749951] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:22:40.721 [2024-11-20 07:20:04.750128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65544 ] 00:22:40.721 [2024-11-20 07:20:04.936719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.979 [2024-11-20 07:20:05.100321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.237 [2024-11-20 07:20:05.324023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.237 [2024-11-20 07:20:05.324075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.804 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.804 BaseBdev1_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 true 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 [2024-11-20 07:20:05.873619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:41.805 [2024-11-20 07:20:05.873720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.805 [2024-11-20 07:20:05.873750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:41.805 [2024-11-20 07:20:05.873770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.805 [2024-11-20 07:20:05.876587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.805 [2024-11-20 07:20:05.876657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.805 BaseBdev1 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 BaseBdev2_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 true 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 [2024-11-20 07:20:05.936344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:41.805 [2024-11-20 07:20:05.936425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.805 [2024-11-20 07:20:05.936464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:41.805 [2024-11-20 07:20:05.936482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.805 [2024-11-20 07:20:05.939559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.805 [2024-11-20 07:20:05.939628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:41.805 BaseBdev2 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 BaseBdev3_malloc 00:22:41.805 07:20:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 true 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 [2024-11-20 07:20:06.011789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:41.805 [2024-11-20 07:20:06.011876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.805 [2024-11-20 07:20:06.011903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:41.805 [2024-11-20 07:20:06.011921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.805 [2024-11-20 07:20:06.014668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.805 [2024-11-20 07:20:06.014712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:41.805 BaseBdev3 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 [2024-11-20 07:20:06.019906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.805 [2024-11-20 07:20:06.022556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:41.805 [2024-11-20 07:20:06.022904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:41.805 [2024-11-20 07:20:06.023187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:41.805 [2024-11-20 07:20:06.023209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:41.805 [2024-11-20 07:20:06.023582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:22:41.805 [2024-11-20 07:20:06.023839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:41.805 [2024-11-20 07:20:06.023878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:41.805 [2024-11-20 07:20:06.024105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.805 07:20:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.805 "name": "raid_bdev1", 00:22:41.805 "uuid": "12290a3c-1bc6-47a1-9840-477b2bd2fc00", 00:22:41.805 "strip_size_kb": 64, 00:22:41.805 "state": "online", 00:22:41.805 "raid_level": "raid0", 00:22:41.805 "superblock": true, 00:22:41.805 "num_base_bdevs": 3, 00:22:41.805 "num_base_bdevs_discovered": 3, 00:22:41.805 "num_base_bdevs_operational": 3, 00:22:41.805 "base_bdevs_list": [ 00:22:41.805 { 00:22:41.805 "name": "BaseBdev1", 00:22:41.805 "uuid": "3e96ef94-757f-5de5-a38a-e371295eadbf", 00:22:41.805 "is_configured": true, 00:22:41.805 "data_offset": 2048, 00:22:41.805 "data_size": 63488 00:22:41.805 }, 00:22:41.805 { 00:22:41.805 "name": "BaseBdev2", 00:22:41.805 "uuid": "63d0bc5e-0b97-5713-bef9-eafb7a55dbe5", 00:22:41.805 "is_configured": true, 00:22:41.805 "data_offset": 2048, 00:22:41.805 "data_size": 63488 
00:22:41.805 }, 00:22:41.805 { 00:22:41.805 "name": "BaseBdev3", 00:22:41.805 "uuid": "5be6980b-e368-5a10-bd7f-cb95f37e95ff", 00:22:41.805 "is_configured": true, 00:22:41.805 "data_offset": 2048, 00:22:41.805 "data_size": 63488 00:22:41.805 } 00:22:41.805 ] 00:22:41.805 }' 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.805 07:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.373 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:42.373 07:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:42.633 [2024-11-20 07:20:06.673703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.569 "name": "raid_bdev1", 00:22:43.569 "uuid": "12290a3c-1bc6-47a1-9840-477b2bd2fc00", 00:22:43.569 "strip_size_kb": 64, 00:22:43.569 "state": "online", 00:22:43.569 "raid_level": "raid0", 00:22:43.569 "superblock": true, 00:22:43.569 "num_base_bdevs": 3, 00:22:43.569 "num_base_bdevs_discovered": 3, 00:22:43.569 "num_base_bdevs_operational": 3, 00:22:43.569 "base_bdevs_list": [ 00:22:43.569 { 00:22:43.569 "name": "BaseBdev1", 00:22:43.569 "uuid": "3e96ef94-757f-5de5-a38a-e371295eadbf", 00:22:43.569 "is_configured": true, 00:22:43.569 "data_offset": 2048, 00:22:43.569 "data_size": 63488 
00:22:43.569 }, 00:22:43.569 { 00:22:43.569 "name": "BaseBdev2", 00:22:43.569 "uuid": "63d0bc5e-0b97-5713-bef9-eafb7a55dbe5", 00:22:43.569 "is_configured": true, 00:22:43.569 "data_offset": 2048, 00:22:43.569 "data_size": 63488 00:22:43.569 }, 00:22:43.569 { 00:22:43.569 "name": "BaseBdev3", 00:22:43.569 "uuid": "5be6980b-e368-5a10-bd7f-cb95f37e95ff", 00:22:43.569 "is_configured": true, 00:22:43.569 "data_offset": 2048, 00:22:43.569 "data_size": 63488 00:22:43.569 } 00:22:43.569 ] 00:22:43.569 }' 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.569 07:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 [2024-11-20 07:20:08.079705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.828 [2024-11-20 07:20:08.079873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.828 [2024-11-20 07:20:08.083446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.828 { 00:22:43.828 "results": [ 00:22:43.828 { 00:22:43.828 "job": "raid_bdev1", 00:22:43.828 "core_mask": "0x1", 00:22:43.828 "workload": "randrw", 00:22:43.828 "percentage": 50, 00:22:43.828 "status": "finished", 00:22:43.828 "queue_depth": 1, 00:22:43.828 "io_size": 131072, 00:22:43.828 "runtime": 1.403615, 00:22:43.828 "iops": 10970.244689605055, 00:22:43.828 "mibps": 1371.280586200632, 00:22:43.828 "io_failed": 1, 00:22:43.828 "io_timeout": 0, 00:22:43.828 "avg_latency_us": 127.28916092544381, 00:22:43.828 "min_latency_us": 36.77090909090909, 00:22:43.828 "max_latency_us": 1817.1345454545456 
00:22:43.828 } 00:22:43.828 ], 00:22:43.828 "core_count": 1 00:22:43.828 } 00:22:43.828 [2024-11-20 07:20:08.083666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.828 [2024-11-20 07:20:08.083738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.828 [2024-11-20 07:20:08.083755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65544 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65544 ']' 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65544 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.828 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65544 00:22:44.087 killing process with pid 65544 00:22:44.087 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.087 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.087 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65544' 00:22:44.087 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65544 00:22:44.087 [2024-11-20 07:20:08.123201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:44.087 07:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65544 00:22:44.087 [2024-11-20 
07:20:08.336709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iNOFbSaEns 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:45.464 ************************************ 00:22:45.464 END TEST raid_read_error_test 00:22:45.464 ************************************ 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:22:45.464 00:22:45.464 real 0m4.799s 00:22:45.464 user 0m6.016s 00:22:45.464 sys 0m0.590s 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.464 07:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.464 07:20:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:22:45.465 07:20:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:45.465 07:20:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.465 07:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 ************************************ 00:22:45.465 START TEST raid_write_error_test 00:22:45.465 ************************************ 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:22:45.465 07:20:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:45.465 07:20:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vkSAAD0fkM 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65697 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65697 00:22:45.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65697 ']' 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.465 07:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 [2024-11-20 07:20:09.596813] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:45.465 [2024-11-20 07:20:09.597264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65697 ] 00:22:45.724 [2024-11-20 07:20:09.783776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.724 [2024-11-20 07:20:09.912457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.983 [2024-11-20 07:20:10.118046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.983 [2024-11-20 07:20:10.118124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 BaseBdev1_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 true 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 [2024-11-20 07:20:10.662983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:46.552 [2024-11-20 07:20:10.663283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.552 [2024-11-20 07:20:10.663323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:46.552 [2024-11-20 07:20:10.663343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.552 [2024-11-20 07:20:10.666234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.552 [2024-11-20 07:20:10.666452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:46.552 BaseBdev1 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.552 BaseBdev2_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 true 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 [2024-11-20 07:20:10.718466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:46.552 [2024-11-20 07:20:10.718549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.552 [2024-11-20 07:20:10.718573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:46.552 [2024-11-20 07:20:10.718589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.552 [2024-11-20 07:20:10.721445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.552 [2024-11-20 07:20:10.721513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:46.552 BaseBdev2 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:46.552 07:20:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 BaseBdev3_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 true 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 [2024-11-20 07:20:10.784765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:46.552 [2024-11-20 07:20:10.784849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.552 [2024-11-20 07:20:10.784878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:46.552 [2024-11-20 07:20:10.784895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.552 [2024-11-20 07:20:10.787749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.552 [2024-11-20 07:20:10.787797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:22:46.552 BaseBdev3 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 [2024-11-20 07:20:10.792893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.552 [2024-11-20 07:20:10.795616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:46.552 [2024-11-20 07:20:10.795744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:46.552 [2024-11-20 07:20:10.796026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:46.552 [2024-11-20 07:20:10.796054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:46.552 [2024-11-20 07:20:10.796371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:22:46.552 [2024-11-20 07:20:10.796643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:46.552 [2024-11-20 07:20:10.796668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:46.552 [2024-11-20 07:20:10.796898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.552 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.553 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.811 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.811 "name": "raid_bdev1", 00:22:46.811 "uuid": "73a4c902-54a9-4f62-9db3-7ed07ef99693", 00:22:46.811 "strip_size_kb": 64, 00:22:46.811 "state": "online", 00:22:46.811 "raid_level": "raid0", 00:22:46.811 "superblock": true, 00:22:46.811 "num_base_bdevs": 3, 00:22:46.811 "num_base_bdevs_discovered": 3, 00:22:46.811 "num_base_bdevs_operational": 3, 00:22:46.811 "base_bdevs_list": [ 00:22:46.811 { 00:22:46.811 "name": "BaseBdev1", 
00:22:46.811 "uuid": "f67fa7ba-abe3-5e21-af40-efdce319ba05", 00:22:46.811 "is_configured": true, 00:22:46.811 "data_offset": 2048, 00:22:46.811 "data_size": 63488 00:22:46.811 }, 00:22:46.811 { 00:22:46.811 "name": "BaseBdev2", 00:22:46.811 "uuid": "dd1b29ab-2c89-5946-bc05-51d31ed2b077", 00:22:46.811 "is_configured": true, 00:22:46.811 "data_offset": 2048, 00:22:46.811 "data_size": 63488 00:22:46.811 }, 00:22:46.811 { 00:22:46.811 "name": "BaseBdev3", 00:22:46.811 "uuid": "8257a491-5160-54be-9125-e9c95cd1008b", 00:22:46.811 "is_configured": true, 00:22:46.811 "data_offset": 2048, 00:22:46.811 "data_size": 63488 00:22:46.811 } 00:22:46.811 ] 00:22:46.811 }' 00:22:46.811 07:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.811 07:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.134 07:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:47.134 07:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:47.393 [2024-11-20 07:20:11.438479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.329 "name": "raid_bdev1", 00:22:48.329 "uuid": "73a4c902-54a9-4f62-9db3-7ed07ef99693", 00:22:48.329 "strip_size_kb": 64, 00:22:48.329 "state": "online", 00:22:48.329 
"raid_level": "raid0", 00:22:48.329 "superblock": true, 00:22:48.329 "num_base_bdevs": 3, 00:22:48.329 "num_base_bdevs_discovered": 3, 00:22:48.329 "num_base_bdevs_operational": 3, 00:22:48.329 "base_bdevs_list": [ 00:22:48.329 { 00:22:48.329 "name": "BaseBdev1", 00:22:48.329 "uuid": "f67fa7ba-abe3-5e21-af40-efdce319ba05", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 }, 00:22:48.329 { 00:22:48.329 "name": "BaseBdev2", 00:22:48.329 "uuid": "dd1b29ab-2c89-5946-bc05-51d31ed2b077", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 }, 00:22:48.329 { 00:22:48.329 "name": "BaseBdev3", 00:22:48.329 "uuid": "8257a491-5160-54be-9125-e9c95cd1008b", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 } 00:22:48.329 ] 00:22:48.329 }' 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.329 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.588 [2024-11-20 07:20:12.869458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.588 [2024-11-20 07:20:12.869494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.588 [2024-11-20 07:20:12.872850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.588 [2024-11-20 07:20:12.872907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.588 [2024-11-20 07:20:12.872960] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.588 [2024-11-20 07:20:12.872974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:48.588 { 00:22:48.588 "results": [ 00:22:48.588 { 00:22:48.588 "job": "raid_bdev1", 00:22:48.588 "core_mask": "0x1", 00:22:48.588 "workload": "randrw", 00:22:48.588 "percentage": 50, 00:22:48.588 "status": "finished", 00:22:48.588 "queue_depth": 1, 00:22:48.588 "io_size": 131072, 00:22:48.588 "runtime": 1.428411, 00:22:48.588 "iops": 10770.70955068254, 00:22:48.588 "mibps": 1346.3386938353176, 00:22:48.588 "io_failed": 1, 00:22:48.588 "io_timeout": 0, 00:22:48.588 "avg_latency_us": 129.9040648523451, 00:22:48.588 "min_latency_us": 38.63272727272727, 00:22:48.588 "max_latency_us": 1832.0290909090909 00:22:48.588 } 00:22:48.588 ], 00:22:48.588 "core_count": 1 00:22:48.588 } 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65697 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65697 ']' 00:22:48.588 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65697 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65697 00:22:48.847 killing process with pid 65697 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.847 07:20:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65697' 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65697 00:22:48.847 07:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65697 00:22:48.847 [2024-11-20 07:20:12.912721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:48.847 [2024-11-20 07:20:13.123272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vkSAAD0fkM 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:22:50.224 ************************************ 00:22:50.224 00:22:50.224 real 0m4.737s 00:22:50.224 user 0m5.907s 00:22:50.224 sys 0m0.594s 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.224 07:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.224 END TEST raid_write_error_test 00:22:50.224 ************************************ 00:22:50.224 07:20:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:50.224 07:20:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:22:50.224 07:20:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:50.224 07:20:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.224 07:20:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:50.224 ************************************ 00:22:50.224 START TEST raid_state_function_test 00:22:50.224 ************************************ 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:50.224 07:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:50.224 Process raid pid: 65835 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65835 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65835' 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65835 00:22:50.224 07:20:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65835 ']' 00:22:50.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.224 07:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.224 [2024-11-20 07:20:14.384769] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:50.224 [2024-11-20 07:20:14.385857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.484 [2024-11-20 07:20:14.576211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.484 [2024-11-20 07:20:14.707366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.743 [2024-11-20 07:20:14.919722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.743 [2024-11-20 07:20:14.920007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.312 [2024-11-20 07:20:15.358039] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.312 [2024-11-20 07:20:15.358120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.312 [2024-11-20 07:20:15.358138] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:51.312 [2024-11-20 07:20:15.358170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:51.312 [2024-11-20 07:20:15.358180] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:51.312 [2024-11-20 07:20:15.358194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.312 "name": "Existed_Raid", 00:22:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.312 "strip_size_kb": 64, 00:22:51.312 "state": "configuring", 00:22:51.312 "raid_level": "concat", 00:22:51.312 "superblock": false, 00:22:51.312 "num_base_bdevs": 3, 00:22:51.312 "num_base_bdevs_discovered": 0, 00:22:51.312 "num_base_bdevs_operational": 3, 00:22:51.312 "base_bdevs_list": [ 00:22:51.312 { 00:22:51.312 "name": "BaseBdev1", 00:22:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.312 "is_configured": false, 00:22:51.312 "data_offset": 0, 00:22:51.312 "data_size": 0 00:22:51.312 }, 00:22:51.312 { 00:22:51.312 "name": "BaseBdev2", 00:22:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.312 "is_configured": false, 00:22:51.312 "data_offset": 0, 00:22:51.312 "data_size": 0 00:22:51.312 }, 00:22:51.312 { 00:22:51.312 "name": "BaseBdev3", 00:22:51.312 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:51.312 "is_configured": false, 00:22:51.312 "data_offset": 0, 00:22:51.312 "data_size": 0 00:22:51.312 } 00:22:51.312 ] 00:22:51.312 }' 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.312 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 [2024-11-20 07:20:15.882118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:51.880 [2024-11-20 07:20:15.882164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 [2024-11-20 07:20:15.894113] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.880 [2024-11-20 07:20:15.894179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.880 [2024-11-20 07:20:15.894212] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:51.880 [2024-11-20 07:20:15.894228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:22:51.880 [2024-11-20 07:20:15.894238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:51.880 [2024-11-20 07:20:15.894253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 [2024-11-20 07:20:15.940504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.880 BaseBdev1 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.880 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.880 [ 00:22:51.880 { 00:22:51.880 "name": "BaseBdev1", 00:22:51.880 "aliases": [ 00:22:51.880 "8ce202ca-d7d7-4325-98f2-fcb2a99de922" 00:22:51.880 ], 00:22:51.880 "product_name": "Malloc disk", 00:22:51.880 "block_size": 512, 00:22:51.880 "num_blocks": 65536, 00:22:51.880 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:51.880 "assigned_rate_limits": { 00:22:51.880 "rw_ios_per_sec": 0, 00:22:51.880 "rw_mbytes_per_sec": 0, 00:22:51.880 "r_mbytes_per_sec": 0, 00:22:51.880 "w_mbytes_per_sec": 0 00:22:51.880 }, 00:22:51.880 "claimed": true, 00:22:51.880 "claim_type": "exclusive_write", 00:22:51.880 "zoned": false, 00:22:51.880 "supported_io_types": { 00:22:51.880 "read": true, 00:22:51.880 "write": true, 00:22:51.880 "unmap": true, 00:22:51.880 "flush": true, 00:22:51.880 "reset": true, 00:22:51.880 "nvme_admin": false, 00:22:51.880 "nvme_io": false, 00:22:51.880 "nvme_io_md": false, 00:22:51.880 "write_zeroes": true, 00:22:51.880 "zcopy": true, 00:22:51.880 "get_zone_info": false, 00:22:51.880 "zone_management": false, 00:22:51.881 "zone_append": false, 00:22:51.881 "compare": false, 00:22:51.881 "compare_and_write": false, 00:22:51.881 "abort": true, 00:22:51.881 "seek_hole": false, 00:22:51.881 "seek_data": false, 00:22:51.881 "copy": true, 00:22:51.881 "nvme_iov_md": false 00:22:51.881 }, 00:22:51.881 "memory_domains": [ 00:22:51.881 { 00:22:51.881 "dma_device_id": "system", 00:22:51.881 "dma_device_type": 1 00:22:51.881 }, 00:22:51.881 { 00:22:51.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:51.881 "dma_device_type": 2 00:22:51.881 } 00:22:51.881 ], 00:22:51.881 "driver_specific": {} 00:22:51.881 } 00:22:51.881 ] 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.881 07:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.881 07:20:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.881 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.881 "name": "Existed_Raid", 00:22:51.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.881 "strip_size_kb": 64, 00:22:51.881 "state": "configuring", 00:22:51.881 "raid_level": "concat", 00:22:51.881 "superblock": false, 00:22:51.881 "num_base_bdevs": 3, 00:22:51.881 "num_base_bdevs_discovered": 1, 00:22:51.881 "num_base_bdevs_operational": 3, 00:22:51.881 "base_bdevs_list": [ 00:22:51.881 { 00:22:51.881 "name": "BaseBdev1", 00:22:51.881 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:51.881 "is_configured": true, 00:22:51.881 "data_offset": 0, 00:22:51.881 "data_size": 65536 00:22:51.881 }, 00:22:51.881 { 00:22:51.881 "name": "BaseBdev2", 00:22:51.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.881 "is_configured": false, 00:22:51.881 "data_offset": 0, 00:22:51.881 "data_size": 0 00:22:51.881 }, 00:22:51.881 { 00:22:51.881 "name": "BaseBdev3", 00:22:51.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.881 "is_configured": false, 00:22:51.881 "data_offset": 0, 00:22:51.881 "data_size": 0 00:22:51.881 } 00:22:51.881 ] 00:22:51.881 }' 00:22:51.881 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.881 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.454 [2024-11-20 07:20:16.492787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:52.454 [2024-11-20 07:20:16.492863] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.454 [2024-11-20 07:20:16.500864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.454 [2024-11-20 07:20:16.503473] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:52.454 [2024-11-20 07:20:16.503574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:52.454 [2024-11-20 07:20:16.503590] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:52.454 [2024-11-20 07:20:16.503658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.454 07:20:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.454 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.454 "name": "Existed_Raid", 00:22:52.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.454 "strip_size_kb": 64, 00:22:52.454 "state": "configuring", 00:22:52.454 "raid_level": "concat", 00:22:52.454 "superblock": false, 00:22:52.454 "num_base_bdevs": 3, 00:22:52.454 "num_base_bdevs_discovered": 1, 00:22:52.454 "num_base_bdevs_operational": 3, 00:22:52.454 "base_bdevs_list": [ 00:22:52.454 { 00:22:52.454 "name": "BaseBdev1", 00:22:52.454 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:52.454 "is_configured": true, 00:22:52.454 "data_offset": 
0, 00:22:52.454 "data_size": 65536 00:22:52.454 }, 00:22:52.454 { 00:22:52.454 "name": "BaseBdev2", 00:22:52.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.454 "is_configured": false, 00:22:52.454 "data_offset": 0, 00:22:52.454 "data_size": 0 00:22:52.454 }, 00:22:52.454 { 00:22:52.454 "name": "BaseBdev3", 00:22:52.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.454 "is_configured": false, 00:22:52.454 "data_offset": 0, 00:22:52.455 "data_size": 0 00:22:52.455 } 00:22:52.455 ] 00:22:52.455 }' 00:22:52.455 07:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.455 07:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.037 [2024-11-20 07:20:17.075001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.037 BaseBdev2 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
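The `waitforbdev BaseBdev2` call traced above blocks until the freshly created malloc bdev is visible via `bdev_get_bdevs`. A minimal sketch of that polling pattern follows; `rpc_cmd` is stubbed here (in the real test it wraps SPDK's `rpc.py`), and the retry/sleep details of the actual helper in `autotest_common.sh` may differ:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforbdev polling pattern seen in the trace.
# rpc_cmd is a stub standing in for SPDK's rpc.py wrapper.
rpc_cmd() {
  # Stub: pretend bdev_get_bdevs immediately finds the requested bdev.
  echo '[{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]'
}

waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-2000} elapsed=0
  while (( elapsed < bdev_timeout )); do
    # Success as soon as the bdev shows up in the RPC listing.
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null 2>&1; then
      return 0
    fi
    sleep 0.1
    (( elapsed += 100 ))
  done
  return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

The test also runs `bdev_wait_for_examine` first, so by the time the poll starts the bdev layer has finished claiming the base device.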
00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.037 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.037 [ 00:22:53.037 { 00:22:53.037 "name": "BaseBdev2", 00:22:53.037 "aliases": [ 00:22:53.037 "ec0a7041-2d24-412e-8bad-5f5aa93093f7" 00:22:53.037 ], 00:22:53.037 "product_name": "Malloc disk", 00:22:53.037 "block_size": 512, 00:22:53.037 "num_blocks": 65536, 00:22:53.037 "uuid": "ec0a7041-2d24-412e-8bad-5f5aa93093f7", 00:22:53.037 "assigned_rate_limits": { 00:22:53.037 "rw_ios_per_sec": 0, 00:22:53.037 "rw_mbytes_per_sec": 0, 00:22:53.037 "r_mbytes_per_sec": 0, 00:22:53.037 "w_mbytes_per_sec": 0 00:22:53.037 }, 00:22:53.037 "claimed": true, 00:22:53.037 "claim_type": "exclusive_write", 00:22:53.037 "zoned": false, 00:22:53.037 "supported_io_types": { 00:22:53.037 "read": true, 00:22:53.037 "write": true, 00:22:53.037 "unmap": true, 00:22:53.037 "flush": true, 00:22:53.037 "reset": true, 00:22:53.038 "nvme_admin": false, 00:22:53.038 "nvme_io": false, 00:22:53.038 "nvme_io_md": false, 00:22:53.038 "write_zeroes": true, 00:22:53.038 "zcopy": true, 00:22:53.038 "get_zone_info": false, 00:22:53.038 "zone_management": false, 00:22:53.038 "zone_append": false, 00:22:53.038 "compare": false, 00:22:53.038 "compare_and_write": false, 00:22:53.038 "abort": true, 00:22:53.038 "seek_hole": 
false, 00:22:53.038 "seek_data": false, 00:22:53.038 "copy": true, 00:22:53.038 "nvme_iov_md": false 00:22:53.038 }, 00:22:53.038 "memory_domains": [ 00:22:53.038 { 00:22:53.038 "dma_device_id": "system", 00:22:53.038 "dma_device_type": 1 00:22:53.038 }, 00:22:53.038 { 00:22:53.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.038 "dma_device_type": 2 00:22:53.038 } 00:22:53.038 ], 00:22:53.038 "driver_specific": {} 00:22:53.038 } 00:22:53.038 ] 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.038 "name": "Existed_Raid", 00:22:53.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.038 "strip_size_kb": 64, 00:22:53.038 "state": "configuring", 00:22:53.038 "raid_level": "concat", 00:22:53.038 "superblock": false, 00:22:53.038 "num_base_bdevs": 3, 00:22:53.038 "num_base_bdevs_discovered": 2, 00:22:53.038 "num_base_bdevs_operational": 3, 00:22:53.038 "base_bdevs_list": [ 00:22:53.038 { 00:22:53.038 "name": "BaseBdev1", 00:22:53.038 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:53.038 "is_configured": true, 00:22:53.038 "data_offset": 0, 00:22:53.038 "data_size": 65536 00:22:53.038 }, 00:22:53.038 { 00:22:53.038 "name": "BaseBdev2", 00:22:53.038 "uuid": "ec0a7041-2d24-412e-8bad-5f5aa93093f7", 00:22:53.038 "is_configured": true, 00:22:53.038 "data_offset": 0, 00:22:53.038 "data_size": 65536 00:22:53.038 }, 00:22:53.038 { 00:22:53.038 "name": "BaseBdev3", 00:22:53.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.038 "is_configured": false, 00:22:53.038 "data_offset": 0, 00:22:53.038 "data_size": 0 00:22:53.038 } 00:22:53.038 ] 00:22:53.038 }' 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.038 07:20:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.605 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:53.605 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.605 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.606 [2024-11-20 07:20:17.703036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:53.606 [2024-11-20 07:20:17.703284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:53.606 [2024-11-20 07:20:17.703320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:53.606 [2024-11-20 07:20:17.703692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:53.606 [2024-11-20 07:20:17.703923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:53.606 [2024-11-20 07:20:17.703941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:53.606 [2024-11-20 07:20:17.704260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.606 BaseBdev3 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:53.606 07:20:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.606 [ 00:22:53.606 { 00:22:53.606 "name": "BaseBdev3", 00:22:53.606 "aliases": [ 00:22:53.606 "270306ae-db98-4229-86bf-56209a3f09c7" 00:22:53.606 ], 00:22:53.606 "product_name": "Malloc disk", 00:22:53.606 "block_size": 512, 00:22:53.606 "num_blocks": 65536, 00:22:53.606 "uuid": "270306ae-db98-4229-86bf-56209a3f09c7", 00:22:53.606 "assigned_rate_limits": { 00:22:53.606 "rw_ios_per_sec": 0, 00:22:53.606 "rw_mbytes_per_sec": 0, 00:22:53.606 "r_mbytes_per_sec": 0, 00:22:53.606 "w_mbytes_per_sec": 0 00:22:53.606 }, 00:22:53.606 "claimed": true, 00:22:53.606 "claim_type": "exclusive_write", 00:22:53.606 "zoned": false, 00:22:53.606 "supported_io_types": { 00:22:53.606 "read": true, 00:22:53.606 "write": true, 00:22:53.606 "unmap": true, 00:22:53.606 "flush": true, 00:22:53.606 "reset": true, 00:22:53.606 "nvme_admin": false, 00:22:53.606 "nvme_io": false, 00:22:53.606 "nvme_io_md": false, 00:22:53.606 "write_zeroes": true, 00:22:53.606 "zcopy": true, 00:22:53.606 "get_zone_info": false, 00:22:53.606 "zone_management": false, 00:22:53.606 "zone_append": false, 00:22:53.606 "compare": false, 
00:22:53.606 "compare_and_write": false, 00:22:53.606 "abort": true, 00:22:53.606 "seek_hole": false, 00:22:53.606 "seek_data": false, 00:22:53.606 "copy": true, 00:22:53.606 "nvme_iov_md": false 00:22:53.606 }, 00:22:53.606 "memory_domains": [ 00:22:53.606 { 00:22:53.606 "dma_device_id": "system", 00:22:53.606 "dma_device_type": 1 00:22:53.606 }, 00:22:53.606 { 00:22:53.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.606 "dma_device_type": 2 00:22:53.606 } 00:22:53.606 ], 00:22:53.606 "driver_specific": {} 00:22:53.606 } 00:22:53.606 ] 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.606 "name": "Existed_Raid", 00:22:53.606 "uuid": "43626a70-3dd6-4204-b534-b2978881ecb2", 00:22:53.606 "strip_size_kb": 64, 00:22:53.606 "state": "online", 00:22:53.606 "raid_level": "concat", 00:22:53.606 "superblock": false, 00:22:53.606 "num_base_bdevs": 3, 00:22:53.606 "num_base_bdevs_discovered": 3, 00:22:53.606 "num_base_bdevs_operational": 3, 00:22:53.606 "base_bdevs_list": [ 00:22:53.606 { 00:22:53.606 "name": "BaseBdev1", 00:22:53.606 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:53.606 "is_configured": true, 00:22:53.606 "data_offset": 0, 00:22:53.606 "data_size": 65536 00:22:53.606 }, 00:22:53.606 { 00:22:53.606 "name": "BaseBdev2", 00:22:53.606 "uuid": "ec0a7041-2d24-412e-8bad-5f5aa93093f7", 00:22:53.606 "is_configured": true, 00:22:53.606 "data_offset": 0, 00:22:53.606 "data_size": 65536 00:22:53.606 }, 00:22:53.606 { 00:22:53.606 "name": "BaseBdev3", 00:22:53.606 "uuid": "270306ae-db98-4229-86bf-56209a3f09c7", 00:22:53.606 "is_configured": true, 00:22:53.606 "data_offset": 0, 00:22:53.606 "data_size": 65536 00:22:53.606 } 00:22:53.606 ] 00:22:53.606 }' 00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
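The `verify_raid_bdev_state Existed_Raid online concat 64 3` step above extracts the raid bdev's JSON with `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the fields against the expected values. A self-contained sketch of that check, pattern-matching the JSON text directly so it runs without SPDK or jq (the real helper in `bdev_raid.sh` uses jq field extraction):

```shell
#!/usr/bin/env bash
# Hedged sketch of the verify_raid_bdev_state check traced above,
# using the "online" dump from the log as canned input.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

verify_raid_bdev_state() {
  local info=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_operational=$5
  [[ $info == *"\"state\": \"$expected_state\""* ]] || return 1
  [[ $info == *"\"raid_level\": \"$raid_level\""* ]] || return 1
  [[ $info == *"\"strip_size_kb\": $strip_size"* ]] || return 1
  [[ $info == *"\"num_base_bdevs_operational\": $num_operational"* ]] || return 1
}

verify_raid_bdev_state "$raid_bdev_info" online concat 64 3 && echo "state ok"
```

Until all three base bdevs are attached, the same check is run with `configuring` as the expected state, which is why the earlier dumps in this trace show `"state": "configuring"` with fewer bdevs discovered.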
00:22:53.606 07:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:54.174 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.175 [2024-11-20 07:20:18.299738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.175 "name": "Existed_Raid", 00:22:54.175 "aliases": [ 00:22:54.175 "43626a70-3dd6-4204-b534-b2978881ecb2" 00:22:54.175 ], 00:22:54.175 "product_name": "Raid Volume", 00:22:54.175 "block_size": 512, 00:22:54.175 "num_blocks": 196608, 00:22:54.175 "uuid": "43626a70-3dd6-4204-b534-b2978881ecb2", 00:22:54.175 "assigned_rate_limits": { 00:22:54.175 "rw_ios_per_sec": 0, 00:22:54.175 "rw_mbytes_per_sec": 0, 00:22:54.175 "r_mbytes_per_sec": 
0, 00:22:54.175 "w_mbytes_per_sec": 0 00:22:54.175 }, 00:22:54.175 "claimed": false, 00:22:54.175 "zoned": false, 00:22:54.175 "supported_io_types": { 00:22:54.175 "read": true, 00:22:54.175 "write": true, 00:22:54.175 "unmap": true, 00:22:54.175 "flush": true, 00:22:54.175 "reset": true, 00:22:54.175 "nvme_admin": false, 00:22:54.175 "nvme_io": false, 00:22:54.175 "nvme_io_md": false, 00:22:54.175 "write_zeroes": true, 00:22:54.175 "zcopy": false, 00:22:54.175 "get_zone_info": false, 00:22:54.175 "zone_management": false, 00:22:54.175 "zone_append": false, 00:22:54.175 "compare": false, 00:22:54.175 "compare_and_write": false, 00:22:54.175 "abort": false, 00:22:54.175 "seek_hole": false, 00:22:54.175 "seek_data": false, 00:22:54.175 "copy": false, 00:22:54.175 "nvme_iov_md": false 00:22:54.175 }, 00:22:54.175 "memory_domains": [ 00:22:54.175 { 00:22:54.175 "dma_device_id": "system", 00:22:54.175 "dma_device_type": 1 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.175 "dma_device_type": 2 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "dma_device_id": "system", 00:22:54.175 "dma_device_type": 1 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.175 "dma_device_type": 2 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "dma_device_id": "system", 00:22:54.175 "dma_device_type": 1 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.175 "dma_device_type": 2 00:22:54.175 } 00:22:54.175 ], 00:22:54.175 "driver_specific": { 00:22:54.175 "raid": { 00:22:54.175 "uuid": "43626a70-3dd6-4204-b534-b2978881ecb2", 00:22:54.175 "strip_size_kb": 64, 00:22:54.175 "state": "online", 00:22:54.175 "raid_level": "concat", 00:22:54.175 "superblock": false, 00:22:54.175 "num_base_bdevs": 3, 00:22:54.175 "num_base_bdevs_discovered": 3, 00:22:54.175 "num_base_bdevs_operational": 3, 00:22:54.175 "base_bdevs_list": [ 00:22:54.175 { 00:22:54.175 "name": "BaseBdev1", 
00:22:54.175 "uuid": "8ce202ca-d7d7-4325-98f2-fcb2a99de922", 00:22:54.175 "is_configured": true, 00:22:54.175 "data_offset": 0, 00:22:54.175 "data_size": 65536 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "name": "BaseBdev2", 00:22:54.175 "uuid": "ec0a7041-2d24-412e-8bad-5f5aa93093f7", 00:22:54.175 "is_configured": true, 00:22:54.175 "data_offset": 0, 00:22:54.175 "data_size": 65536 00:22:54.175 }, 00:22:54.175 { 00:22:54.175 "name": "BaseBdev3", 00:22:54.175 "uuid": "270306ae-db98-4229-86bf-56209a3f09c7", 00:22:54.175 "is_configured": true, 00:22:54.175 "data_offset": 0, 00:22:54.175 "data_size": 65536 00:22:54.175 } 00:22:54.175 ] 00:22:54.175 } 00:22:54.175 } 00:22:54.175 }' 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:54.175 BaseBdev2 00:22:54.175 BaseBdev3' 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.175 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.435 [2024-11-20 07:20:18.623437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:54.435 [2024-11-20 07:20:18.623471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.435 [2024-11-20 07:20:18.623539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.435 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.695 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.695 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.695 "name": "Existed_Raid", 00:22:54.695 "uuid": "43626a70-3dd6-4204-b534-b2978881ecb2", 00:22:54.695 "strip_size_kb": 64, 00:22:54.695 "state": "offline", 00:22:54.695 "raid_level": "concat", 00:22:54.695 "superblock": false, 00:22:54.695 "num_base_bdevs": 3, 00:22:54.695 "num_base_bdevs_discovered": 2, 00:22:54.695 "num_base_bdevs_operational": 2, 00:22:54.695 "base_bdevs_list": [ 00:22:54.695 { 00:22:54.695 "name": null, 00:22:54.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.695 "is_configured": false, 00:22:54.695 "data_offset": 0, 00:22:54.695 "data_size": 65536 00:22:54.695 }, 00:22:54.695 { 00:22:54.695 "name": "BaseBdev2", 00:22:54.695 "uuid": 
"ec0a7041-2d24-412e-8bad-5f5aa93093f7", 00:22:54.695 "is_configured": true, 00:22:54.695 "data_offset": 0, 00:22:54.695 "data_size": 65536 00:22:54.695 }, 00:22:54.695 { 00:22:54.695 "name": "BaseBdev3", 00:22:54.695 "uuid": "270306ae-db98-4229-86bf-56209a3f09c7", 00:22:54.695 "is_configured": true, 00:22:54.695 "data_offset": 0, 00:22:54.695 "data_size": 65536 00:22:54.695 } 00:22:54.695 ] 00:22:54.695 }' 00:22:54.695 07:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.695 07:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.954 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.213 [2024-11-20 07:20:19.287723] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.213 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.213 [2024-11-20 07:20:19.437563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:55.213 [2024-11-20 07:20:19.437650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:55.472 07:20:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.472 BaseBdev2 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:55.472 
07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.472 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.472 [ 00:22:55.472 { 00:22:55.472 "name": "BaseBdev2", 00:22:55.472 "aliases": [ 00:22:55.472 "6f961bca-3656-4638-b466-119451fe98ce" 00:22:55.472 ], 00:22:55.472 "product_name": "Malloc disk", 00:22:55.472 "block_size": 512, 00:22:55.472 "num_blocks": 65536, 00:22:55.472 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:55.472 "assigned_rate_limits": { 00:22:55.472 "rw_ios_per_sec": 0, 00:22:55.472 "rw_mbytes_per_sec": 0, 00:22:55.472 "r_mbytes_per_sec": 0, 00:22:55.472 "w_mbytes_per_sec": 0 00:22:55.472 }, 00:22:55.472 "claimed": false, 00:22:55.472 "zoned": false, 00:22:55.472 "supported_io_types": { 00:22:55.472 "read": true, 00:22:55.472 "write": true, 00:22:55.472 "unmap": true, 00:22:55.472 "flush": true, 00:22:55.472 "reset": true, 00:22:55.472 "nvme_admin": false, 00:22:55.472 "nvme_io": false, 00:22:55.472 "nvme_io_md": false, 00:22:55.472 "write_zeroes": true, 
00:22:55.472 "zcopy": true, 00:22:55.472 "get_zone_info": false, 00:22:55.472 "zone_management": false, 00:22:55.473 "zone_append": false, 00:22:55.473 "compare": false, 00:22:55.473 "compare_and_write": false, 00:22:55.473 "abort": true, 00:22:55.473 "seek_hole": false, 00:22:55.473 "seek_data": false, 00:22:55.473 "copy": true, 00:22:55.473 "nvme_iov_md": false 00:22:55.473 }, 00:22:55.473 "memory_domains": [ 00:22:55.473 { 00:22:55.473 "dma_device_id": "system", 00:22:55.473 "dma_device_type": 1 00:22:55.473 }, 00:22:55.473 { 00:22:55.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.473 "dma_device_type": 2 00:22:55.473 } 00:22:55.473 ], 00:22:55.473 "driver_specific": {} 00:22:55.473 } 00:22:55.473 ] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.473 BaseBdev3 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:55.473 07:20:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.473 [ 00:22:55.473 { 00:22:55.473 "name": "BaseBdev3", 00:22:55.473 "aliases": [ 00:22:55.473 "ff1bacc1-62c8-4883-9fc0-5aee3da085d5" 00:22:55.473 ], 00:22:55.473 "product_name": "Malloc disk", 00:22:55.473 "block_size": 512, 00:22:55.473 "num_blocks": 65536, 00:22:55.473 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:55.473 "assigned_rate_limits": { 00:22:55.473 "rw_ios_per_sec": 0, 00:22:55.473 "rw_mbytes_per_sec": 0, 00:22:55.473 "r_mbytes_per_sec": 0, 00:22:55.473 "w_mbytes_per_sec": 0 00:22:55.473 }, 00:22:55.473 "claimed": false, 00:22:55.473 "zoned": false, 00:22:55.473 "supported_io_types": { 00:22:55.473 "read": true, 00:22:55.473 "write": true, 00:22:55.473 "unmap": true, 00:22:55.473 "flush": true, 00:22:55.473 "reset": true, 00:22:55.473 "nvme_admin": false, 00:22:55.473 "nvme_io": false, 00:22:55.473 "nvme_io_md": false, 00:22:55.473 "write_zeroes": true, 
00:22:55.473 "zcopy": true, 00:22:55.473 "get_zone_info": false, 00:22:55.473 "zone_management": false, 00:22:55.473 "zone_append": false, 00:22:55.473 "compare": false, 00:22:55.473 "compare_and_write": false, 00:22:55.473 "abort": true, 00:22:55.473 "seek_hole": false, 00:22:55.473 "seek_data": false, 00:22:55.473 "copy": true, 00:22:55.473 "nvme_iov_md": false 00:22:55.473 }, 00:22:55.473 "memory_domains": [ 00:22:55.473 { 00:22:55.473 "dma_device_id": "system", 00:22:55.473 "dma_device_type": 1 00:22:55.473 }, 00:22:55.473 { 00:22:55.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.473 "dma_device_type": 2 00:22:55.473 } 00:22:55.473 ], 00:22:55.473 "driver_specific": {} 00:22:55.473 } 00:22:55.473 ] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.473 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.732 [2024-11-20 07:20:19.761403] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:55.732 [2024-11-20 07:20:19.761609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:55.732 [2024-11-20 07:20:19.761775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.732 [2024-11-20 07:20:19.764415] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.732 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.733 "name": "Existed_Raid", 00:22:55.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.733 "strip_size_kb": 64, 00:22:55.733 "state": "configuring", 00:22:55.733 "raid_level": "concat", 00:22:55.733 "superblock": false, 00:22:55.733 "num_base_bdevs": 3, 00:22:55.733 "num_base_bdevs_discovered": 2, 00:22:55.733 "num_base_bdevs_operational": 3, 00:22:55.733 "base_bdevs_list": [ 00:22:55.733 { 00:22:55.733 "name": "BaseBdev1", 00:22:55.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.733 "is_configured": false, 00:22:55.733 "data_offset": 0, 00:22:55.733 "data_size": 0 00:22:55.733 }, 00:22:55.733 { 00:22:55.733 "name": "BaseBdev2", 00:22:55.733 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:55.733 "is_configured": true, 00:22:55.733 "data_offset": 0, 00:22:55.733 "data_size": 65536 00:22:55.733 }, 00:22:55.733 { 00:22:55.733 "name": "BaseBdev3", 00:22:55.733 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:55.733 "is_configured": true, 00:22:55.733 "data_offset": 0, 00:22:55.733 "data_size": 65536 00:22:55.733 } 00:22:55.733 ] 00:22:55.733 }' 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.733 07:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.298 [2024-11-20 07:20:20.305612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.298 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.298 "name": "Existed_Raid", 00:22:56.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.298 "strip_size_kb": 64, 00:22:56.298 "state": "configuring", 00:22:56.298 "raid_level": "concat", 00:22:56.298 "superblock": false, 
00:22:56.298 "num_base_bdevs": 3, 00:22:56.298 "num_base_bdevs_discovered": 1, 00:22:56.298 "num_base_bdevs_operational": 3, 00:22:56.298 "base_bdevs_list": [ 00:22:56.298 { 00:22:56.298 "name": "BaseBdev1", 00:22:56.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.298 "is_configured": false, 00:22:56.298 "data_offset": 0, 00:22:56.298 "data_size": 0 00:22:56.298 }, 00:22:56.299 { 00:22:56.299 "name": null, 00:22:56.299 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:56.299 "is_configured": false, 00:22:56.299 "data_offset": 0, 00:22:56.299 "data_size": 65536 00:22:56.299 }, 00:22:56.299 { 00:22:56.299 "name": "BaseBdev3", 00:22:56.299 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:56.299 "is_configured": true, 00:22:56.299 "data_offset": 0, 00:22:56.299 "data_size": 65536 00:22:56.299 } 00:22:56.299 ] 00:22:56.299 }' 00:22:56.299 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.299 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.865 
07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.865 [2024-11-20 07:20:20.947112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.865 BaseBdev1 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.865 [ 00:22:56.865 { 00:22:56.865 "name": "BaseBdev1", 00:22:56.865 "aliases": [ 00:22:56.865 "18babf3f-8196-40d6-955a-983dd5d51b99" 00:22:56.865 ], 00:22:56.865 "product_name": 
"Malloc disk", 00:22:56.865 "block_size": 512, 00:22:56.865 "num_blocks": 65536, 00:22:56.865 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:56.865 "assigned_rate_limits": { 00:22:56.865 "rw_ios_per_sec": 0, 00:22:56.865 "rw_mbytes_per_sec": 0, 00:22:56.865 "r_mbytes_per_sec": 0, 00:22:56.865 "w_mbytes_per_sec": 0 00:22:56.865 }, 00:22:56.865 "claimed": true, 00:22:56.865 "claim_type": "exclusive_write", 00:22:56.865 "zoned": false, 00:22:56.865 "supported_io_types": { 00:22:56.865 "read": true, 00:22:56.865 "write": true, 00:22:56.865 "unmap": true, 00:22:56.865 "flush": true, 00:22:56.865 "reset": true, 00:22:56.865 "nvme_admin": false, 00:22:56.865 "nvme_io": false, 00:22:56.865 "nvme_io_md": false, 00:22:56.865 "write_zeroes": true, 00:22:56.865 "zcopy": true, 00:22:56.865 "get_zone_info": false, 00:22:56.865 "zone_management": false, 00:22:56.865 "zone_append": false, 00:22:56.865 "compare": false, 00:22:56.865 "compare_and_write": false, 00:22:56.865 "abort": true, 00:22:56.865 "seek_hole": false, 00:22:56.865 "seek_data": false, 00:22:56.865 "copy": true, 00:22:56.865 "nvme_iov_md": false 00:22:56.865 }, 00:22:56.865 "memory_domains": [ 00:22:56.865 { 00:22:56.865 "dma_device_id": "system", 00:22:56.865 "dma_device_type": 1 00:22:56.865 }, 00:22:56.865 { 00:22:56.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.865 "dma_device_type": 2 00:22:56.865 } 00:22:56.865 ], 00:22:56.865 "driver_specific": {} 00:22:56.865 } 00:22:56.865 ] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:56.865 07:20:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:56.865 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.866 07:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.866 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.866 "name": "Existed_Raid", 00:22:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.866 "strip_size_kb": 64, 00:22:56.866 "state": "configuring", 00:22:56.866 "raid_level": "concat", 00:22:56.866 "superblock": false, 00:22:56.866 "num_base_bdevs": 3, 00:22:56.866 "num_base_bdevs_discovered": 2, 00:22:56.866 "num_base_bdevs_operational": 3, 00:22:56.866 "base_bdevs_list": [ 00:22:56.866 { 00:22:56.866 "name": "BaseBdev1", 
00:22:56.866 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:56.866 "is_configured": true, 00:22:56.866 "data_offset": 0, 00:22:56.866 "data_size": 65536 00:22:56.866 }, 00:22:56.866 { 00:22:56.866 "name": null, 00:22:56.866 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:56.866 "is_configured": false, 00:22:56.866 "data_offset": 0, 00:22:56.866 "data_size": 65536 00:22:56.866 }, 00:22:56.866 { 00:22:56.866 "name": "BaseBdev3", 00:22:56.866 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:56.866 "is_configured": true, 00:22:56.866 "data_offset": 0, 00:22:56.866 "data_size": 65536 00:22:56.866 } 00:22:56.866 ] 00:22:56.866 }' 00:22:56.866 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.866 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.433 [2024-11-20 07:20:21.579384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:57.433 
07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.433 "name": "Existed_Raid", 00:22:57.433 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:57.433 "strip_size_kb": 64, 00:22:57.433 "state": "configuring", 00:22:57.433 "raid_level": "concat", 00:22:57.433 "superblock": false, 00:22:57.433 "num_base_bdevs": 3, 00:22:57.433 "num_base_bdevs_discovered": 1, 00:22:57.433 "num_base_bdevs_operational": 3, 00:22:57.433 "base_bdevs_list": [ 00:22:57.433 { 00:22:57.433 "name": "BaseBdev1", 00:22:57.433 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:57.433 "is_configured": true, 00:22:57.433 "data_offset": 0, 00:22:57.433 "data_size": 65536 00:22:57.433 }, 00:22:57.433 { 00:22:57.433 "name": null, 00:22:57.433 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:57.433 "is_configured": false, 00:22:57.433 "data_offset": 0, 00:22:57.433 "data_size": 65536 00:22:57.433 }, 00:22:57.433 { 00:22:57.433 "name": null, 00:22:57.433 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:57.433 "is_configured": false, 00:22:57.433 "data_offset": 0, 00:22:57.433 "data_size": 65536 00:22:57.433 } 00:22:57.433 ] 00:22:57.433 }' 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.433 07:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.042 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.042 [2024-11-20 07:20:22.163588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.043 "name": "Existed_Raid", 00:22:58.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.043 "strip_size_kb": 64, 00:22:58.043 "state": "configuring", 00:22:58.043 "raid_level": "concat", 00:22:58.043 "superblock": false, 00:22:58.043 "num_base_bdevs": 3, 00:22:58.043 "num_base_bdevs_discovered": 2, 00:22:58.043 "num_base_bdevs_operational": 3, 00:22:58.043 "base_bdevs_list": [ 00:22:58.043 { 00:22:58.043 "name": "BaseBdev1", 00:22:58.043 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:58.043 "is_configured": true, 00:22:58.043 "data_offset": 0, 00:22:58.043 "data_size": 65536 00:22:58.043 }, 00:22:58.043 { 00:22:58.043 "name": null, 00:22:58.043 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:58.043 "is_configured": false, 00:22:58.043 "data_offset": 0, 00:22:58.043 "data_size": 65536 00:22:58.043 }, 00:22:58.043 { 00:22:58.043 "name": "BaseBdev3", 00:22:58.043 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:58.043 "is_configured": true, 00:22:58.043 "data_offset": 0, 00:22:58.043 "data_size": 65536 00:22:58.043 } 00:22:58.043 ] 00:22:58.043 }' 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.043 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.609 [2024-11-20 07:20:22.743836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.609 07:20:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.609 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.609 "name": "Existed_Raid", 00:22:58.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.609 "strip_size_kb": 64, 00:22:58.609 "state": "configuring", 00:22:58.609 "raid_level": "concat", 00:22:58.609 "superblock": false, 00:22:58.609 "num_base_bdevs": 3, 00:22:58.609 "num_base_bdevs_discovered": 1, 00:22:58.609 "num_base_bdevs_operational": 3, 00:22:58.609 "base_bdevs_list": [ 00:22:58.609 { 00:22:58.609 "name": null, 00:22:58.609 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:58.609 "is_configured": false, 00:22:58.609 "data_offset": 0, 00:22:58.609 "data_size": 65536 00:22:58.609 }, 00:22:58.609 { 00:22:58.610 "name": null, 00:22:58.610 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:58.610 "is_configured": false, 00:22:58.610 "data_offset": 0, 00:22:58.610 "data_size": 65536 00:22:58.610 }, 00:22:58.610 { 00:22:58.610 "name": "BaseBdev3", 00:22:58.610 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:58.610 "is_configured": true, 00:22:58.610 "data_offset": 0, 00:22:58.610 "data_size": 65536 00:22:58.610 } 00:22:58.610 ] 00:22:58.610 }' 00:22:58.610 07:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.610 07:20:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.176 [2024-11-20 07:20:23.415474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.176 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.177 07:20:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.177 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.435 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.435 "name": "Existed_Raid", 00:22:59.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.435 "strip_size_kb": 64, 00:22:59.435 "state": "configuring", 00:22:59.435 "raid_level": "concat", 00:22:59.435 "superblock": false, 00:22:59.435 "num_base_bdevs": 3, 00:22:59.435 "num_base_bdevs_discovered": 2, 00:22:59.435 "num_base_bdevs_operational": 3, 00:22:59.435 "base_bdevs_list": [ 00:22:59.435 { 00:22:59.435 "name": null, 00:22:59.435 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:59.435 "is_configured": false, 00:22:59.435 "data_offset": 0, 00:22:59.435 "data_size": 65536 00:22:59.435 }, 00:22:59.435 { 00:22:59.435 "name": "BaseBdev2", 00:22:59.435 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:59.435 "is_configured": true, 00:22:59.435 "data_offset": 
0, 00:22:59.435 "data_size": 65536 00:22:59.435 }, 00:22:59.435 { 00:22:59.435 "name": "BaseBdev3", 00:22:59.435 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:59.435 "is_configured": true, 00:22:59.435 "data_offset": 0, 00:22:59.435 "data_size": 65536 00:22:59.435 } 00:22:59.435 ] 00:22:59.435 }' 00:22:59.435 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.435 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.694 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.694 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.694 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.694 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:59.694 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.954 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:59.954 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.954 07:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:59.954 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.954 07:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18babf3f-8196-40d6-955a-983dd5d51b99 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.954 [2024-11-20 07:20:24.088319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:59.954 [2024-11-20 07:20:24.088548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:59.954 [2024-11-20 07:20:24.088579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:59.954 [2024-11-20 07:20:24.088935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:59.954 [2024-11-20 07:20:24.089125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:59.954 [2024-11-20 07:20:24.089156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:59.954 [2024-11-20 07:20:24.089471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.954 NewBaseBdev 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:59.954 
07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.954 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.954 [ 00:22:59.954 { 00:22:59.954 "name": "NewBaseBdev", 00:22:59.954 "aliases": [ 00:22:59.954 "18babf3f-8196-40d6-955a-983dd5d51b99" 00:22:59.954 ], 00:22:59.954 "product_name": "Malloc disk", 00:22:59.954 "block_size": 512, 00:22:59.954 "num_blocks": 65536, 00:22:59.954 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:59.954 "assigned_rate_limits": { 00:22:59.954 "rw_ios_per_sec": 0, 00:22:59.954 "rw_mbytes_per_sec": 0, 00:22:59.954 "r_mbytes_per_sec": 0, 00:22:59.954 "w_mbytes_per_sec": 0 00:22:59.954 }, 00:22:59.954 "claimed": true, 00:22:59.954 "claim_type": "exclusive_write", 00:22:59.955 "zoned": false, 00:22:59.955 "supported_io_types": { 00:22:59.955 "read": true, 00:22:59.955 "write": true, 00:22:59.955 "unmap": true, 00:22:59.955 "flush": true, 00:22:59.955 "reset": true, 00:22:59.955 "nvme_admin": false, 00:22:59.955 "nvme_io": false, 00:22:59.955 "nvme_io_md": false, 00:22:59.955 "write_zeroes": true, 00:22:59.955 "zcopy": true, 00:22:59.955 "get_zone_info": false, 00:22:59.955 "zone_management": false, 00:22:59.955 "zone_append": false, 00:22:59.955 "compare": false, 00:22:59.955 "compare_and_write": false, 00:22:59.955 "abort": true, 00:22:59.955 "seek_hole": false, 00:22:59.955 "seek_data": false, 00:22:59.955 "copy": true, 00:22:59.955 "nvme_iov_md": false 00:22:59.955 }, 00:22:59.955 
"memory_domains": [ 00:22:59.955 { 00:22:59.955 "dma_device_id": "system", 00:22:59.955 "dma_device_type": 1 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.955 "dma_device_type": 2 00:22:59.955 } 00:22:59.955 ], 00:22:59.955 "driver_specific": {} 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.955 "name": "Existed_Raid", 00:22:59.955 "uuid": "848eb404-fbcc-462d-9555-e594805f2495", 00:22:59.955 "strip_size_kb": 64, 00:22:59.955 "state": "online", 00:22:59.955 "raid_level": "concat", 00:22:59.955 "superblock": false, 00:22:59.955 "num_base_bdevs": 3, 00:22:59.955 "num_base_bdevs_discovered": 3, 00:22:59.955 "num_base_bdevs_operational": 3, 00:22:59.955 "base_bdevs_list": [ 00:22:59.955 { 00:22:59.955 "name": "NewBaseBdev", 00:22:59.955 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:22:59.955 "is_configured": true, 00:22:59.955 "data_offset": 0, 00:22:59.955 "data_size": 65536 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "name": "BaseBdev2", 00:22:59.955 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:22:59.955 "is_configured": true, 00:22:59.955 "data_offset": 0, 00:22:59.955 "data_size": 65536 00:22:59.955 }, 00:22:59.955 { 00:22:59.955 "name": "BaseBdev3", 00:22:59.955 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:22:59.955 "is_configured": true, 00:22:59.955 "data_offset": 0, 00:22:59.955 "data_size": 65536 00:22:59.955 } 00:22:59.955 ] 00:22:59.955 }' 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.955 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:00.522 [2024-11-20 07:20:24.668918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.522 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.522 "name": "Existed_Raid", 00:23:00.522 "aliases": [ 00:23:00.522 "848eb404-fbcc-462d-9555-e594805f2495" 00:23:00.522 ], 00:23:00.522 "product_name": "Raid Volume", 00:23:00.522 "block_size": 512, 00:23:00.522 "num_blocks": 196608, 00:23:00.522 "uuid": "848eb404-fbcc-462d-9555-e594805f2495", 00:23:00.522 "assigned_rate_limits": { 00:23:00.522 "rw_ios_per_sec": 0, 00:23:00.522 "rw_mbytes_per_sec": 0, 00:23:00.522 "r_mbytes_per_sec": 0, 00:23:00.522 "w_mbytes_per_sec": 0 00:23:00.522 }, 00:23:00.522 "claimed": false, 00:23:00.522 "zoned": false, 00:23:00.522 "supported_io_types": { 00:23:00.522 "read": true, 00:23:00.522 "write": true, 00:23:00.522 "unmap": true, 00:23:00.522 "flush": true, 00:23:00.522 "reset": true, 00:23:00.522 "nvme_admin": false, 00:23:00.522 "nvme_io": false, 00:23:00.522 "nvme_io_md": false, 00:23:00.522 
"write_zeroes": true, 00:23:00.522 "zcopy": false, 00:23:00.522 "get_zone_info": false, 00:23:00.522 "zone_management": false, 00:23:00.522 "zone_append": false, 00:23:00.522 "compare": false, 00:23:00.522 "compare_and_write": false, 00:23:00.522 "abort": false, 00:23:00.522 "seek_hole": false, 00:23:00.522 "seek_data": false, 00:23:00.522 "copy": false, 00:23:00.522 "nvme_iov_md": false 00:23:00.522 }, 00:23:00.522 "memory_domains": [ 00:23:00.522 { 00:23:00.522 "dma_device_id": "system", 00:23:00.522 "dma_device_type": 1 00:23:00.522 }, 00:23:00.522 { 00:23:00.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.522 "dma_device_type": 2 00:23:00.522 }, 00:23:00.522 { 00:23:00.522 "dma_device_id": "system", 00:23:00.522 "dma_device_type": 1 00:23:00.522 }, 00:23:00.522 { 00:23:00.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.522 "dma_device_type": 2 00:23:00.522 }, 00:23:00.522 { 00:23:00.522 "dma_device_id": "system", 00:23:00.522 "dma_device_type": 1 00:23:00.522 }, 00:23:00.522 { 00:23:00.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.522 "dma_device_type": 2 00:23:00.523 } 00:23:00.523 ], 00:23:00.523 "driver_specific": { 00:23:00.523 "raid": { 00:23:00.523 "uuid": "848eb404-fbcc-462d-9555-e594805f2495", 00:23:00.523 "strip_size_kb": 64, 00:23:00.523 "state": "online", 00:23:00.523 "raid_level": "concat", 00:23:00.523 "superblock": false, 00:23:00.523 "num_base_bdevs": 3, 00:23:00.523 "num_base_bdevs_discovered": 3, 00:23:00.523 "num_base_bdevs_operational": 3, 00:23:00.523 "base_bdevs_list": [ 00:23:00.523 { 00:23:00.523 "name": "NewBaseBdev", 00:23:00.523 "uuid": "18babf3f-8196-40d6-955a-983dd5d51b99", 00:23:00.523 "is_configured": true, 00:23:00.523 "data_offset": 0, 00:23:00.523 "data_size": 65536 00:23:00.523 }, 00:23:00.523 { 00:23:00.523 "name": "BaseBdev2", 00:23:00.523 "uuid": "6f961bca-3656-4638-b466-119451fe98ce", 00:23:00.523 "is_configured": true, 00:23:00.523 "data_offset": 0, 00:23:00.523 "data_size": 65536 00:23:00.523 }, 
00:23:00.523 { 00:23:00.523 "name": "BaseBdev3", 00:23:00.523 "uuid": "ff1bacc1-62c8-4883-9fc0-5aee3da085d5", 00:23:00.523 "is_configured": true, 00:23:00.523 "data_offset": 0, 00:23:00.523 "data_size": 65536 00:23:00.523 } 00:23:00.523 ] 00:23:00.523 } 00:23:00.523 } 00:23:00.523 }' 00:23:00.523 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:00.523 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:00.523 BaseBdev2 00:23:00.523 BaseBdev3' 00:23:00.523 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.781 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.782 07:20:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.782 07:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.782 
07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.782 [2024-11-20 07:20:25.008704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:00.782 [2024-11-20 07:20:25.008737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.782 [2024-11-20 07:20:25.008827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.782 [2024-11-20 07:20:25.008900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.782 [2024-11-20 07:20:25.008920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65835 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65835 ']' 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65835 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65835 00:23:00.782 killing process with pid 65835 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65835' 00:23:00.782 07:20:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65835 00:23:00.782 [2024-11-20 07:20:25.048116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.782 07:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65835 00:23:01.040 [2024-11-20 07:20:25.314854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:02.445 00:23:02.445 real 0m12.091s 00:23:02.445 user 0m20.098s 00:23:02.445 sys 0m1.624s 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.445 ************************************ 00:23:02.445 END TEST raid_state_function_test 00:23:02.445 ************************************ 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.445 07:20:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:23:02.445 07:20:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:02.445 07:20:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.445 07:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.445 ************************************ 00:23:02.445 START TEST raid_state_function_test_sb 00:23:02.445 ************************************ 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:02.445 07:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:02.445 07:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:02.445 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:02.446 Process raid pid: 66473 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66473 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66473' 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66473 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66473 ']' 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.446 07:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.446 [2024-11-20 07:20:26.539786] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:02.446 [2024-11-20 07:20:26.540220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.704 [2024-11-20 07:20:26.734552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.704 [2024-11-20 07:20:26.896012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.962 [2024-11-20 07:20:27.109650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.962 [2024-11-20 07:20:27.109711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.220 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.220 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:03.220 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:03.220 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.220 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.220 [2024-11-20 07:20:27.495239] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:03.220 [2024-11-20 07:20:27.495310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:03.220 [2024-11-20 
07:20:27.495329] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.221 [2024-11-20 07:20:27.495351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.221 [2024-11-20 07:20:27.495362] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.221 [2024-11-20 07:20:27.495378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.221 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.479 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.479 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.479 "name": "Existed_Raid", 00:23:03.479 "uuid": "9fb5db9b-9a83-406d-9d36-cd7cd6416ba9", 00:23:03.479 "strip_size_kb": 64, 00:23:03.479 "state": "configuring", 00:23:03.479 "raid_level": "concat", 00:23:03.479 "superblock": true, 00:23:03.479 "num_base_bdevs": 3, 00:23:03.479 "num_base_bdevs_discovered": 0, 00:23:03.479 "num_base_bdevs_operational": 3, 00:23:03.479 "base_bdevs_list": [ 00:23:03.479 { 00:23:03.479 "name": "BaseBdev1", 00:23:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.479 "is_configured": false, 00:23:03.479 "data_offset": 0, 00:23:03.479 "data_size": 0 00:23:03.479 }, 00:23:03.479 { 00:23:03.479 "name": "BaseBdev2", 00:23:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.479 "is_configured": false, 00:23:03.479 "data_offset": 0, 00:23:03.479 "data_size": 0 00:23:03.479 }, 00:23:03.479 { 00:23:03.479 "name": "BaseBdev3", 00:23:03.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.479 "is_configured": false, 00:23:03.479 "data_offset": 0, 00:23:03.479 "data_size": 0 00:23:03.479 } 00:23:03.479 ] 00:23:03.479 }' 00:23:03.479 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.479 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.738 07:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:03.738 07:20:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.738 07:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.738 [2024-11-20 07:20:28.003288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:03.738 [2024-11-20 07:20:28.003463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.738 [2024-11-20 07:20:28.011285] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:03.738 [2024-11-20 07:20:28.011464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:03.738 [2024-11-20 07:20:28.011604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.738 [2024-11-20 07:20:28.011671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.738 [2024-11-20 07:20:28.011792] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.738 [2024-11-20 07:20:28.011853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:03.738 
07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.738 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.996 [2024-11-20 07:20:28.057742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.996 BaseBdev1 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.996 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.996 [ 00:23:03.996 { 
00:23:03.996 "name": "BaseBdev1", 00:23:03.996 "aliases": [ 00:23:03.996 "cef6df9f-6350-4e31-8a09-cfb63a936191" 00:23:03.996 ], 00:23:03.996 "product_name": "Malloc disk", 00:23:03.996 "block_size": 512, 00:23:03.997 "num_blocks": 65536, 00:23:03.997 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:03.997 "assigned_rate_limits": { 00:23:03.997 "rw_ios_per_sec": 0, 00:23:03.997 "rw_mbytes_per_sec": 0, 00:23:03.997 "r_mbytes_per_sec": 0, 00:23:03.997 "w_mbytes_per_sec": 0 00:23:03.997 }, 00:23:03.997 "claimed": true, 00:23:03.997 "claim_type": "exclusive_write", 00:23:03.997 "zoned": false, 00:23:03.997 "supported_io_types": { 00:23:03.997 "read": true, 00:23:03.997 "write": true, 00:23:03.997 "unmap": true, 00:23:03.997 "flush": true, 00:23:03.997 "reset": true, 00:23:03.997 "nvme_admin": false, 00:23:03.997 "nvme_io": false, 00:23:03.997 "nvme_io_md": false, 00:23:03.997 "write_zeroes": true, 00:23:03.997 "zcopy": true, 00:23:03.997 "get_zone_info": false, 00:23:03.997 "zone_management": false, 00:23:03.997 "zone_append": false, 00:23:03.997 "compare": false, 00:23:03.997 "compare_and_write": false, 00:23:03.997 "abort": true, 00:23:03.997 "seek_hole": false, 00:23:03.997 "seek_data": false, 00:23:03.997 "copy": true, 00:23:03.997 "nvme_iov_md": false 00:23:03.997 }, 00:23:03.997 "memory_domains": [ 00:23:03.997 { 00:23:03.997 "dma_device_id": "system", 00:23:03.997 "dma_device_type": 1 00:23:03.997 }, 00:23:03.997 { 00:23:03.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.997 "dma_device_type": 2 00:23:03.997 } 00:23:03.997 ], 00:23:03.997 "driver_specific": {} 00:23:03.997 } 00:23:03.997 ] 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.997 "name": "Existed_Raid", 00:23:03.997 "uuid": "270a610a-6dd0-4b1f-9362-f62a79dfaae0", 00:23:03.997 "strip_size_kb": 64, 00:23:03.997 "state": "configuring", 00:23:03.997 "raid_level": "concat", 00:23:03.997 "superblock": true, 00:23:03.997 
"num_base_bdevs": 3, 00:23:03.997 "num_base_bdevs_discovered": 1, 00:23:03.997 "num_base_bdevs_operational": 3, 00:23:03.997 "base_bdevs_list": [ 00:23:03.997 { 00:23:03.997 "name": "BaseBdev1", 00:23:03.997 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:03.997 "is_configured": true, 00:23:03.997 "data_offset": 2048, 00:23:03.997 "data_size": 63488 00:23:03.997 }, 00:23:03.997 { 00:23:03.997 "name": "BaseBdev2", 00:23:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.997 "is_configured": false, 00:23:03.997 "data_offset": 0, 00:23:03.997 "data_size": 0 00:23:03.997 }, 00:23:03.997 { 00:23:03.997 "name": "BaseBdev3", 00:23:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.997 "is_configured": false, 00:23:03.997 "data_offset": 0, 00:23:03.997 "data_size": 0 00:23:03.997 } 00:23:03.997 ] 00:23:03.997 }' 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.997 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 [2024-11-20 07:20:28.589928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.563 [2024-11-20 07:20:28.589993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:04.563 
07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 [2024-11-20 07:20:28.597980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.563 [2024-11-20 07:20:28.600422] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.563 [2024-11-20 07:20:28.600476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.563 [2024-11-20 07:20:28.600493] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:04.563 [2024-11-20 07:20:28.600509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.563 "name": "Existed_Raid", 00:23:04.563 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:04.563 "strip_size_kb": 64, 00:23:04.563 "state": "configuring", 00:23:04.563 "raid_level": "concat", 00:23:04.563 "superblock": true, 00:23:04.563 "num_base_bdevs": 3, 00:23:04.563 "num_base_bdevs_discovered": 1, 00:23:04.563 "num_base_bdevs_operational": 3, 00:23:04.563 "base_bdevs_list": [ 00:23:04.563 { 00:23:04.563 "name": "BaseBdev1", 00:23:04.563 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:04.563 "is_configured": true, 00:23:04.563 "data_offset": 2048, 00:23:04.563 "data_size": 63488 00:23:04.563 }, 00:23:04.563 { 00:23:04.563 "name": "BaseBdev2", 00:23:04.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.563 "is_configured": false, 00:23:04.563 "data_offset": 0, 00:23:04.563 "data_size": 0 00:23:04.563 }, 00:23:04.563 { 00:23:04.563 "name": "BaseBdev3", 00:23:04.563 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:04.563 "is_configured": false, 00:23:04.563 "data_offset": 0, 00:23:04.563 "data_size": 0 00:23:04.563 } 00:23:04.563 ] 00:23:04.563 }' 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.563 07:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.128 [2024-11-20 07:20:29.184546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.128 BaseBdev2 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.128 [ 00:23:05.128 { 00:23:05.128 "name": "BaseBdev2", 00:23:05.128 "aliases": [ 00:23:05.128 "f3ddaed6-c98d-4640-9a5b-83126473f959" 00:23:05.128 ], 00:23:05.128 "product_name": "Malloc disk", 00:23:05.128 "block_size": 512, 00:23:05.128 "num_blocks": 65536, 00:23:05.128 "uuid": "f3ddaed6-c98d-4640-9a5b-83126473f959", 00:23:05.128 "assigned_rate_limits": { 00:23:05.128 "rw_ios_per_sec": 0, 00:23:05.128 "rw_mbytes_per_sec": 0, 00:23:05.128 "r_mbytes_per_sec": 0, 00:23:05.128 "w_mbytes_per_sec": 0 00:23:05.128 }, 00:23:05.128 "claimed": true, 00:23:05.128 "claim_type": "exclusive_write", 00:23:05.128 "zoned": false, 00:23:05.128 "supported_io_types": { 00:23:05.128 "read": true, 00:23:05.128 "write": true, 00:23:05.128 "unmap": true, 00:23:05.128 "flush": true, 00:23:05.128 "reset": true, 00:23:05.128 "nvme_admin": false, 00:23:05.128 "nvme_io": false, 00:23:05.128 "nvme_io_md": false, 00:23:05.128 "write_zeroes": true, 00:23:05.128 "zcopy": true, 00:23:05.128 "get_zone_info": false, 00:23:05.128 "zone_management": false, 00:23:05.128 "zone_append": false, 00:23:05.128 "compare": false, 00:23:05.128 "compare_and_write": false, 00:23:05.128 "abort": true, 00:23:05.128 "seek_hole": false, 00:23:05.128 "seek_data": false, 00:23:05.128 "copy": true, 00:23:05.128 "nvme_iov_md": false 00:23:05.128 }, 00:23:05.128 "memory_domains": [ 00:23:05.128 { 00:23:05.128 "dma_device_id": "system", 00:23:05.128 "dma_device_type": 1 00:23:05.128 }, 00:23:05.128 { 00:23:05.128 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.128 "dma_device_type": 2 00:23:05.128 } 00:23:05.128 ], 00:23:05.128 "driver_specific": {} 00:23:05.128 } 00:23:05.128 ] 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.128 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.129 "name": "Existed_Raid", 00:23:05.129 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:05.129 "strip_size_kb": 64, 00:23:05.129 "state": "configuring", 00:23:05.129 "raid_level": "concat", 00:23:05.129 "superblock": true, 00:23:05.129 "num_base_bdevs": 3, 00:23:05.129 "num_base_bdevs_discovered": 2, 00:23:05.129 "num_base_bdevs_operational": 3, 00:23:05.129 "base_bdevs_list": [ 00:23:05.129 { 00:23:05.129 "name": "BaseBdev1", 00:23:05.129 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:05.129 "is_configured": true, 00:23:05.129 "data_offset": 2048, 00:23:05.129 "data_size": 63488 00:23:05.129 }, 00:23:05.129 { 00:23:05.129 "name": "BaseBdev2", 00:23:05.129 "uuid": "f3ddaed6-c98d-4640-9a5b-83126473f959", 00:23:05.129 "is_configured": true, 00:23:05.129 "data_offset": 2048, 00:23:05.129 "data_size": 63488 00:23:05.129 }, 00:23:05.129 { 00:23:05.129 "name": "BaseBdev3", 00:23:05.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.129 "is_configured": false, 00:23:05.129 "data_offset": 0, 00:23:05.129 "data_size": 0 00:23:05.129 } 00:23:05.129 ] 00:23:05.129 }' 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.129 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.695 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:05.695 07:20:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.695 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.695 [2024-11-20 07:20:29.783017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:05.695 [2024-11-20 07:20:29.783337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:05.695 [2024-11-20 07:20:29.783370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:05.696 BaseBdev3 00:23:05.696 [2024-11-20 07:20:29.783765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:05.696 [2024-11-20 07:20:29.783969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:05.696 [2024-11-20 07:20:29.783993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:05.696 [2024-11-20 07:20:29.784191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.696 [ 00:23:05.696 { 00:23:05.696 "name": "BaseBdev3", 00:23:05.696 "aliases": [ 00:23:05.696 "714c2aa9-623e-43fa-801c-b216304ca890" 00:23:05.696 ], 00:23:05.696 "product_name": "Malloc disk", 00:23:05.696 "block_size": 512, 00:23:05.696 "num_blocks": 65536, 00:23:05.696 "uuid": "714c2aa9-623e-43fa-801c-b216304ca890", 00:23:05.696 "assigned_rate_limits": { 00:23:05.696 "rw_ios_per_sec": 0, 00:23:05.696 "rw_mbytes_per_sec": 0, 00:23:05.696 "r_mbytes_per_sec": 0, 00:23:05.696 "w_mbytes_per_sec": 0 00:23:05.696 }, 00:23:05.696 "claimed": true, 00:23:05.696 "claim_type": "exclusive_write", 00:23:05.696 "zoned": false, 00:23:05.696 "supported_io_types": { 00:23:05.696 "read": true, 00:23:05.696 "write": true, 00:23:05.696 "unmap": true, 00:23:05.696 "flush": true, 00:23:05.696 "reset": true, 00:23:05.696 "nvme_admin": false, 00:23:05.696 "nvme_io": false, 00:23:05.696 "nvme_io_md": false, 00:23:05.696 "write_zeroes": true, 00:23:05.696 "zcopy": true, 00:23:05.696 "get_zone_info": false, 00:23:05.696 "zone_management": false, 00:23:05.696 "zone_append": false, 00:23:05.696 "compare": false, 00:23:05.696 "compare_and_write": false, 00:23:05.696 "abort": true, 00:23:05.696 "seek_hole": false, 00:23:05.696 "seek_data": false, 
00:23:05.696 "copy": true, 00:23:05.696 "nvme_iov_md": false 00:23:05.696 }, 00:23:05.696 "memory_domains": [ 00:23:05.696 { 00:23:05.696 "dma_device_id": "system", 00:23:05.696 "dma_device_type": 1 00:23:05.696 }, 00:23:05.696 { 00:23:05.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.696 "dma_device_type": 2 00:23:05.696 } 00:23:05.696 ], 00:23:05.696 "driver_specific": {} 00:23:05.696 } 00:23:05.696 ] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.696 "name": "Existed_Raid", 00:23:05.696 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:05.696 "strip_size_kb": 64, 00:23:05.696 "state": "online", 00:23:05.696 "raid_level": "concat", 00:23:05.696 "superblock": true, 00:23:05.696 "num_base_bdevs": 3, 00:23:05.696 "num_base_bdevs_discovered": 3, 00:23:05.696 "num_base_bdevs_operational": 3, 00:23:05.696 "base_bdevs_list": [ 00:23:05.696 { 00:23:05.696 "name": "BaseBdev1", 00:23:05.696 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:05.696 "is_configured": true, 00:23:05.696 "data_offset": 2048, 00:23:05.696 "data_size": 63488 00:23:05.696 }, 00:23:05.696 { 00:23:05.696 "name": "BaseBdev2", 00:23:05.696 "uuid": "f3ddaed6-c98d-4640-9a5b-83126473f959", 00:23:05.696 "is_configured": true, 00:23:05.696 "data_offset": 2048, 00:23:05.696 "data_size": 63488 00:23:05.696 }, 00:23:05.696 { 00:23:05.696 "name": "BaseBdev3", 00:23:05.696 "uuid": "714c2aa9-623e-43fa-801c-b216304ca890", 00:23:05.696 "is_configured": true, 00:23:05.696 "data_offset": 2048, 00:23:05.696 "data_size": 63488 00:23:05.696 } 00:23:05.696 ] 00:23:05.696 }' 00:23:05.696 07:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.696 07:20:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.263 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.264 [2024-11-20 07:20:30.347633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:06.264 "name": "Existed_Raid", 00:23:06.264 "aliases": [ 00:23:06.264 "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9" 00:23:06.264 ], 00:23:06.264 "product_name": "Raid Volume", 00:23:06.264 "block_size": 512, 00:23:06.264 "num_blocks": 190464, 00:23:06.264 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:06.264 "assigned_rate_limits": { 00:23:06.264 "rw_ios_per_sec": 0, 00:23:06.264 "rw_mbytes_per_sec": 0, 00:23:06.264 
"r_mbytes_per_sec": 0, 00:23:06.264 "w_mbytes_per_sec": 0 00:23:06.264 }, 00:23:06.264 "claimed": false, 00:23:06.264 "zoned": false, 00:23:06.264 "supported_io_types": { 00:23:06.264 "read": true, 00:23:06.264 "write": true, 00:23:06.264 "unmap": true, 00:23:06.264 "flush": true, 00:23:06.264 "reset": true, 00:23:06.264 "nvme_admin": false, 00:23:06.264 "nvme_io": false, 00:23:06.264 "nvme_io_md": false, 00:23:06.264 "write_zeroes": true, 00:23:06.264 "zcopy": false, 00:23:06.264 "get_zone_info": false, 00:23:06.264 "zone_management": false, 00:23:06.264 "zone_append": false, 00:23:06.264 "compare": false, 00:23:06.264 "compare_and_write": false, 00:23:06.264 "abort": false, 00:23:06.264 "seek_hole": false, 00:23:06.264 "seek_data": false, 00:23:06.264 "copy": false, 00:23:06.264 "nvme_iov_md": false 00:23:06.264 }, 00:23:06.264 "memory_domains": [ 00:23:06.264 { 00:23:06.264 "dma_device_id": "system", 00:23:06.264 "dma_device_type": 1 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.264 "dma_device_type": 2 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "dma_device_id": "system", 00:23:06.264 "dma_device_type": 1 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.264 "dma_device_type": 2 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "dma_device_id": "system", 00:23:06.264 "dma_device_type": 1 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.264 "dma_device_type": 2 00:23:06.264 } 00:23:06.264 ], 00:23:06.264 "driver_specific": { 00:23:06.264 "raid": { 00:23:06.264 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:06.264 "strip_size_kb": 64, 00:23:06.264 "state": "online", 00:23:06.264 "raid_level": "concat", 00:23:06.264 "superblock": true, 00:23:06.264 "num_base_bdevs": 3, 00:23:06.264 "num_base_bdevs_discovered": 3, 00:23:06.264 "num_base_bdevs_operational": 3, 00:23:06.264 "base_bdevs_list": [ 00:23:06.264 { 00:23:06.264 
"name": "BaseBdev1", 00:23:06.264 "uuid": "cef6df9f-6350-4e31-8a09-cfb63a936191", 00:23:06.264 "is_configured": true, 00:23:06.264 "data_offset": 2048, 00:23:06.264 "data_size": 63488 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "name": "BaseBdev2", 00:23:06.264 "uuid": "f3ddaed6-c98d-4640-9a5b-83126473f959", 00:23:06.264 "is_configured": true, 00:23:06.264 "data_offset": 2048, 00:23:06.264 "data_size": 63488 00:23:06.264 }, 00:23:06.264 { 00:23:06.264 "name": "BaseBdev3", 00:23:06.264 "uuid": "714c2aa9-623e-43fa-801c-b216304ca890", 00:23:06.264 "is_configured": true, 00:23:06.264 "data_offset": 2048, 00:23:06.264 "data_size": 63488 00:23:06.264 } 00:23:06.264 ] 00:23:06.264 } 00:23:06.264 } 00:23:06.264 }' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:06.264 BaseBdev2 00:23:06.264 BaseBdev3' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:06.264 07:20:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.264 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.523 [2024-11-20 07:20:30.659357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:06.523 [2024-11-20 07:20:30.659390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:06.523 [2024-11-20 07:20:30.659456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.523 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.523 "name": "Existed_Raid", 00:23:06.523 "uuid": "49356fb3-b28f-408b-a67b-f1f4a2ef1ca9", 00:23:06.523 "strip_size_kb": 64, 00:23:06.523 "state": "offline", 00:23:06.523 "raid_level": "concat", 00:23:06.523 "superblock": true, 00:23:06.523 "num_base_bdevs": 3, 00:23:06.523 "num_base_bdevs_discovered": 2, 00:23:06.523 "num_base_bdevs_operational": 2, 00:23:06.523 "base_bdevs_list": [ 00:23:06.523 { 00:23:06.523 "name": null, 00:23:06.523 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:06.523 "is_configured": false, 00:23:06.523 "data_offset": 0, 00:23:06.523 "data_size": 63488 00:23:06.523 }, 00:23:06.523 { 00:23:06.523 "name": "BaseBdev2", 00:23:06.523 "uuid": "f3ddaed6-c98d-4640-9a5b-83126473f959", 00:23:06.523 "is_configured": true, 00:23:06.523 "data_offset": 2048, 00:23:06.523 "data_size": 63488 00:23:06.523 }, 00:23:06.523 { 00:23:06.523 "name": "BaseBdev3", 00:23:06.524 "uuid": "714c2aa9-623e-43fa-801c-b216304ca890", 00:23:06.524 "is_configured": true, 00:23:06.524 "data_offset": 2048, 00:23:06.524 "data_size": 63488 00:23:06.524 } 00:23:06.524 ] 00:23:06.524 }' 00:23:06.524 07:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.524 07:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.091 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.091 [2024-11-20 07:20:31.344065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.350 [2024-11-20 07:20:31.480532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:07.350 [2024-11-20 07:20:31.480763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.350 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 BaseBdev2 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 
07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 [ 00:23:07.610 { 00:23:07.610 "name": "BaseBdev2", 00:23:07.610 "aliases": [ 00:23:07.610 "76d19b30-f157-4003-95bd-05053e858bbc" 00:23:07.610 ], 00:23:07.610 "product_name": "Malloc disk", 00:23:07.610 "block_size": 512, 00:23:07.610 "num_blocks": 65536, 00:23:07.610 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:07.610 "assigned_rate_limits": { 00:23:07.610 "rw_ios_per_sec": 0, 00:23:07.610 "rw_mbytes_per_sec": 0, 00:23:07.610 "r_mbytes_per_sec": 0, 00:23:07.610 "w_mbytes_per_sec": 0 
00:23:07.610 }, 00:23:07.610 "claimed": false, 00:23:07.610 "zoned": false, 00:23:07.610 "supported_io_types": { 00:23:07.610 "read": true, 00:23:07.610 "write": true, 00:23:07.610 "unmap": true, 00:23:07.610 "flush": true, 00:23:07.610 "reset": true, 00:23:07.610 "nvme_admin": false, 00:23:07.610 "nvme_io": false, 00:23:07.610 "nvme_io_md": false, 00:23:07.610 "write_zeroes": true, 00:23:07.610 "zcopy": true, 00:23:07.610 "get_zone_info": false, 00:23:07.610 "zone_management": false, 00:23:07.610 "zone_append": false, 00:23:07.610 "compare": false, 00:23:07.610 "compare_and_write": false, 00:23:07.610 "abort": true, 00:23:07.610 "seek_hole": false, 00:23:07.610 "seek_data": false, 00:23:07.610 "copy": true, 00:23:07.610 "nvme_iov_md": false 00:23:07.610 }, 00:23:07.610 "memory_domains": [ 00:23:07.610 { 00:23:07.610 "dma_device_id": "system", 00:23:07.610 "dma_device_type": 1 00:23:07.610 }, 00:23:07.610 { 00:23:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.610 "dma_device_type": 2 00:23:07.610 } 00:23:07.610 ], 00:23:07.610 "driver_specific": {} 00:23:07.610 } 00:23:07.610 ] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 BaseBdev3 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.610 [ 00:23:07.610 { 00:23:07.610 "name": "BaseBdev3", 00:23:07.610 "aliases": [ 00:23:07.610 "e8c2830d-520a-46fa-b7ab-587cb9d66ab5" 00:23:07.610 ], 00:23:07.610 "product_name": "Malloc disk", 00:23:07.610 "block_size": 512, 00:23:07.610 "num_blocks": 65536, 00:23:07.610 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:07.610 "assigned_rate_limits": { 00:23:07.610 "rw_ios_per_sec": 0, 00:23:07.610 "rw_mbytes_per_sec": 0, 
00:23:07.610 "r_mbytes_per_sec": 0, 00:23:07.610 "w_mbytes_per_sec": 0 00:23:07.610 }, 00:23:07.610 "claimed": false, 00:23:07.610 "zoned": false, 00:23:07.610 "supported_io_types": { 00:23:07.610 "read": true, 00:23:07.610 "write": true, 00:23:07.610 "unmap": true, 00:23:07.610 "flush": true, 00:23:07.610 "reset": true, 00:23:07.610 "nvme_admin": false, 00:23:07.610 "nvme_io": false, 00:23:07.610 "nvme_io_md": false, 00:23:07.610 "write_zeroes": true, 00:23:07.610 "zcopy": true, 00:23:07.610 "get_zone_info": false, 00:23:07.610 "zone_management": false, 00:23:07.610 "zone_append": false, 00:23:07.610 "compare": false, 00:23:07.610 "compare_and_write": false, 00:23:07.610 "abort": true, 00:23:07.610 "seek_hole": false, 00:23:07.610 "seek_data": false, 00:23:07.610 "copy": true, 00:23:07.610 "nvme_iov_md": false 00:23:07.610 }, 00:23:07.610 "memory_domains": [ 00:23:07.610 { 00:23:07.610 "dma_device_id": "system", 00:23:07.610 "dma_device_type": 1 00:23:07.610 }, 00:23:07.610 { 00:23:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.610 "dma_device_type": 2 00:23:07.610 } 00:23:07.610 ], 00:23:07.610 "driver_specific": {} 00:23:07.610 } 00:23:07.610 ] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:07.610 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.611 [2024-11-20 07:20:31.771370] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.611 [2024-11-20 07:20:31.771576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.611 [2024-11-20 07:20:31.771732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.611 [2024-11-20 07:20:31.774152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.611 07:20:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.611 "name": "Existed_Raid", 00:23:07.611 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:07.611 "strip_size_kb": 64, 00:23:07.611 "state": "configuring", 00:23:07.611 "raid_level": "concat", 00:23:07.611 "superblock": true, 00:23:07.611 "num_base_bdevs": 3, 00:23:07.611 "num_base_bdevs_discovered": 2, 00:23:07.611 "num_base_bdevs_operational": 3, 00:23:07.611 "base_bdevs_list": [ 00:23:07.611 { 00:23:07.611 "name": "BaseBdev1", 00:23:07.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.611 "is_configured": false, 00:23:07.611 "data_offset": 0, 00:23:07.611 "data_size": 0 00:23:07.611 }, 00:23:07.611 { 00:23:07.611 "name": "BaseBdev2", 00:23:07.611 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:07.611 "is_configured": true, 00:23:07.611 "data_offset": 2048, 00:23:07.611 "data_size": 63488 00:23:07.611 }, 00:23:07.611 { 00:23:07.611 "name": "BaseBdev3", 00:23:07.611 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:07.611 "is_configured": true, 00:23:07.611 "data_offset": 2048, 00:23:07.611 "data_size": 63488 00:23:07.611 } 00:23:07.611 ] 00:23:07.611 }' 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.611 07:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.178 [2024-11-20 07:20:32.279549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.178 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.178 "name": "Existed_Raid", 00:23:08.178 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:08.178 "strip_size_kb": 64, 00:23:08.178 "state": "configuring", 00:23:08.178 "raid_level": "concat", 00:23:08.178 "superblock": true, 00:23:08.178 "num_base_bdevs": 3, 00:23:08.178 "num_base_bdevs_discovered": 1, 00:23:08.178 "num_base_bdevs_operational": 3, 00:23:08.178 "base_bdevs_list": [ 00:23:08.178 { 00:23:08.178 "name": "BaseBdev1", 00:23:08.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.179 "is_configured": false, 00:23:08.179 "data_offset": 0, 00:23:08.179 "data_size": 0 00:23:08.179 }, 00:23:08.179 { 00:23:08.179 "name": null, 00:23:08.179 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:08.179 "is_configured": false, 00:23:08.179 "data_offset": 0, 00:23:08.179 "data_size": 63488 00:23:08.179 }, 00:23:08.179 { 00:23:08.179 "name": "BaseBdev3", 00:23:08.179 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:08.179 "is_configured": true, 00:23:08.179 "data_offset": 2048, 00:23:08.179 "data_size": 63488 00:23:08.179 } 00:23:08.179 ] 00:23:08.179 }' 00:23:08.179 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.179 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.746 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 [2024-11-20 07:20:32.894437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.747 BaseBdev1 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 07:20:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 [ 00:23:08.747 { 00:23:08.747 "name": "BaseBdev1", 00:23:08.747 "aliases": [ 00:23:08.747 "a1be8ba6-2530-4f22-ab7f-722f0451f084" 00:23:08.747 ], 00:23:08.747 "product_name": "Malloc disk", 00:23:08.747 "block_size": 512, 00:23:08.747 "num_blocks": 65536, 00:23:08.747 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:08.747 "assigned_rate_limits": { 00:23:08.747 "rw_ios_per_sec": 0, 00:23:08.747 "rw_mbytes_per_sec": 0, 00:23:08.747 "r_mbytes_per_sec": 0, 00:23:08.747 "w_mbytes_per_sec": 0 00:23:08.747 }, 00:23:08.747 "claimed": true, 00:23:08.747 "claim_type": "exclusive_write", 00:23:08.747 "zoned": false, 00:23:08.747 "supported_io_types": { 00:23:08.747 "read": true, 00:23:08.747 "write": true, 00:23:08.747 "unmap": true, 00:23:08.747 "flush": true, 00:23:08.747 "reset": true, 00:23:08.747 "nvme_admin": false, 00:23:08.747 "nvme_io": false, 00:23:08.747 "nvme_io_md": false, 00:23:08.747 "write_zeroes": true, 00:23:08.747 "zcopy": true, 00:23:08.747 "get_zone_info": false, 00:23:08.747 "zone_management": false, 00:23:08.747 "zone_append": false, 00:23:08.747 "compare": false, 00:23:08.747 "compare_and_write": false, 00:23:08.747 "abort": true, 00:23:08.747 "seek_hole": false, 00:23:08.747 "seek_data": false, 00:23:08.747 "copy": true, 00:23:08.747 "nvme_iov_md": false 00:23:08.747 }, 00:23:08.747 "memory_domains": [ 00:23:08.747 { 00:23:08.747 "dma_device_id": "system", 00:23:08.747 "dma_device_type": 1 00:23:08.747 }, 00:23:08.747 { 00:23:08.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.747 
"dma_device_type": 2 00:23:08.747 } 00:23:08.747 ], 00:23:08.747 "driver_specific": {} 00:23:08.747 } 00:23:08.747 ] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.747 "name": "Existed_Raid", 00:23:08.747 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:08.747 "strip_size_kb": 64, 00:23:08.747 "state": "configuring", 00:23:08.747 "raid_level": "concat", 00:23:08.747 "superblock": true, 00:23:08.747 "num_base_bdevs": 3, 00:23:08.747 "num_base_bdevs_discovered": 2, 00:23:08.747 "num_base_bdevs_operational": 3, 00:23:08.747 "base_bdevs_list": [ 00:23:08.747 { 00:23:08.747 "name": "BaseBdev1", 00:23:08.747 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:08.747 "is_configured": true, 00:23:08.747 "data_offset": 2048, 00:23:08.747 "data_size": 63488 00:23:08.747 }, 00:23:08.747 { 00:23:08.747 "name": null, 00:23:08.747 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:08.747 "is_configured": false, 00:23:08.747 "data_offset": 0, 00:23:08.747 "data_size": 63488 00:23:08.747 }, 00:23:08.747 { 00:23:08.747 "name": "BaseBdev3", 00:23:08.747 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:08.747 "is_configured": true, 00:23:08.747 "data_offset": 2048, 00:23:08.747 "data_size": 63488 00:23:08.747 } 00:23:08.747 ] 00:23:08.747 }' 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.747 07:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.313 [2024-11-20 07:20:33.530684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.313 
07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.313 "name": "Existed_Raid", 00:23:09.313 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:09.313 "strip_size_kb": 64, 00:23:09.313 "state": "configuring", 00:23:09.313 "raid_level": "concat", 00:23:09.313 "superblock": true, 00:23:09.313 "num_base_bdevs": 3, 00:23:09.313 "num_base_bdevs_discovered": 1, 00:23:09.313 "num_base_bdevs_operational": 3, 00:23:09.313 "base_bdevs_list": [ 00:23:09.313 { 00:23:09.313 "name": "BaseBdev1", 00:23:09.313 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:09.313 "is_configured": true, 00:23:09.313 "data_offset": 2048, 00:23:09.313 "data_size": 63488 00:23:09.313 }, 00:23:09.313 { 00:23:09.313 "name": null, 00:23:09.313 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:09.313 "is_configured": false, 00:23:09.313 "data_offset": 0, 00:23:09.313 "data_size": 63488 00:23:09.313 }, 00:23:09.313 { 00:23:09.313 "name": null, 00:23:09.313 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:09.313 "is_configured": false, 00:23:09.313 "data_offset": 0, 00:23:09.313 "data_size": 63488 00:23:09.313 } 00:23:09.313 ] 00:23:09.313 }' 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.313 07:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.883 
07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.883 [2024-11-20 07:20:34.082878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.883 "name": "Existed_Raid", 00:23:09.883 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:09.883 "strip_size_kb": 64, 00:23:09.883 "state": "configuring", 00:23:09.883 "raid_level": "concat", 00:23:09.883 "superblock": true, 00:23:09.883 "num_base_bdevs": 3, 00:23:09.883 "num_base_bdevs_discovered": 2, 00:23:09.883 "num_base_bdevs_operational": 3, 00:23:09.883 "base_bdevs_list": [ 00:23:09.883 { 00:23:09.883 "name": "BaseBdev1", 00:23:09.883 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:09.883 "is_configured": true, 00:23:09.883 "data_offset": 2048, 00:23:09.883 "data_size": 63488 00:23:09.883 }, 00:23:09.883 { 00:23:09.883 "name": null, 00:23:09.883 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:09.883 "is_configured": false, 00:23:09.883 "data_offset": 0, 00:23:09.883 "data_size": 
63488 00:23:09.883 }, 00:23:09.883 { 00:23:09.883 "name": "BaseBdev3", 00:23:09.883 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:09.883 "is_configured": true, 00:23:09.883 "data_offset": 2048, 00:23:09.883 "data_size": 63488 00:23:09.883 } 00:23:09.883 ] 00:23:09.883 }' 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.883 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.450 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 [2024-11-20 07:20:34.667158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:10.707 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.708 "name": "Existed_Raid", 00:23:10.708 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:10.708 "strip_size_kb": 64, 00:23:10.708 "state": "configuring", 00:23:10.708 "raid_level": "concat", 00:23:10.708 "superblock": true, 00:23:10.708 "num_base_bdevs": 3, 00:23:10.708 "num_base_bdevs_discovered": 1, 00:23:10.708 "num_base_bdevs_operational": 
3, 00:23:10.708 "base_bdevs_list": [ 00:23:10.708 { 00:23:10.708 "name": null, 00:23:10.708 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:10.708 "is_configured": false, 00:23:10.708 "data_offset": 0, 00:23:10.708 "data_size": 63488 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "name": null, 00:23:10.708 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:10.708 "is_configured": false, 00:23:10.708 "data_offset": 0, 00:23:10.708 "data_size": 63488 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "name": "BaseBdev3", 00:23:10.708 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:10.708 "is_configured": true, 00:23:10.708 "data_offset": 2048, 00:23:10.708 "data_size": 63488 00:23:10.708 } 00:23:10.708 ] 00:23:10.708 }' 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.708 07:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:11.273 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:11.274 [2024-11-20 07:20:35.345236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.274 "name": "Existed_Raid", 00:23:11.274 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:11.274 "strip_size_kb": 64, 00:23:11.274 "state": "configuring", 00:23:11.274 "raid_level": "concat", 00:23:11.274 "superblock": true, 00:23:11.274 "num_base_bdevs": 3, 00:23:11.274 "num_base_bdevs_discovered": 2, 00:23:11.274 "num_base_bdevs_operational": 3, 00:23:11.274 "base_bdevs_list": [ 00:23:11.274 { 00:23:11.274 "name": null, 00:23:11.274 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:11.274 "is_configured": false, 00:23:11.274 "data_offset": 0, 00:23:11.274 "data_size": 63488 00:23:11.274 }, 00:23:11.274 { 00:23:11.274 "name": "BaseBdev2", 00:23:11.274 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:11.274 "is_configured": true, 00:23:11.274 "data_offset": 2048, 00:23:11.274 "data_size": 63488 00:23:11.274 }, 00:23:11.274 { 00:23:11.274 "name": "BaseBdev3", 00:23:11.274 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:11.274 "is_configured": true, 00:23:11.274 "data_offset": 2048, 00:23:11.274 "data_size": 63488 00:23:11.274 } 00:23:11.274 ] 00:23:11.274 }' 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.274 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.839 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 07:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:11.840 07:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a1be8ba6-2530-4f22-ab7f-722f0451f084 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 [2024-11-20 07:20:36.062888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:11.840 [2024-11-20 07:20:36.063157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:11.840 [2024-11-20 07:20:36.063181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:11.840 [2024-11-20 07:20:36.063485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:11.840 NewBaseBdev 00:23:11.840 [2024-11-20 07:20:36.063698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:11.840 [2024-11-20 07:20:36.063716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:11.840 [2024-11-20 07:20:36.063884] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 [ 00:23:11.840 { 00:23:11.840 "name": "NewBaseBdev", 00:23:11.840 "aliases": [ 00:23:11.840 "a1be8ba6-2530-4f22-ab7f-722f0451f084" 00:23:11.840 ], 00:23:11.840 "product_name": "Malloc disk", 00:23:11.840 "block_size": 512, 00:23:11.840 "num_blocks": 65536, 00:23:11.840 "uuid": 
"a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:11.840 "assigned_rate_limits": { 00:23:11.840 "rw_ios_per_sec": 0, 00:23:11.840 "rw_mbytes_per_sec": 0, 00:23:11.840 "r_mbytes_per_sec": 0, 00:23:11.840 "w_mbytes_per_sec": 0 00:23:11.840 }, 00:23:11.840 "claimed": true, 00:23:11.840 "claim_type": "exclusive_write", 00:23:11.840 "zoned": false, 00:23:11.840 "supported_io_types": { 00:23:11.840 "read": true, 00:23:11.840 "write": true, 00:23:11.840 "unmap": true, 00:23:11.840 "flush": true, 00:23:11.840 "reset": true, 00:23:11.840 "nvme_admin": false, 00:23:11.840 "nvme_io": false, 00:23:11.840 "nvme_io_md": false, 00:23:11.840 "write_zeroes": true, 00:23:11.840 "zcopy": true, 00:23:11.840 "get_zone_info": false, 00:23:11.840 "zone_management": false, 00:23:11.840 "zone_append": false, 00:23:11.840 "compare": false, 00:23:11.840 "compare_and_write": false, 00:23:11.840 "abort": true, 00:23:11.840 "seek_hole": false, 00:23:11.840 "seek_data": false, 00:23:11.840 "copy": true, 00:23:11.840 "nvme_iov_md": false 00:23:11.840 }, 00:23:11.840 "memory_domains": [ 00:23:11.840 { 00:23:11.840 "dma_device_id": "system", 00:23:11.840 "dma_device_type": 1 00:23:11.840 }, 00:23:11.840 { 00:23:11.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.840 "dma_device_type": 2 00:23:11.840 } 00:23:11.840 ], 00:23:11.840 "driver_specific": {} 00:23:11.840 } 00:23:11.840 ] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.840 07:20:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.098 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.098 "name": "Existed_Raid", 00:23:12.098 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:12.098 "strip_size_kb": 64, 00:23:12.098 "state": "online", 00:23:12.098 "raid_level": "concat", 00:23:12.098 "superblock": true, 00:23:12.098 "num_base_bdevs": 3, 00:23:12.098 "num_base_bdevs_discovered": 3, 00:23:12.098 "num_base_bdevs_operational": 3, 00:23:12.098 "base_bdevs_list": [ 00:23:12.098 { 00:23:12.098 "name": "NewBaseBdev", 00:23:12.098 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:12.098 "is_configured": 
true, 00:23:12.098 "data_offset": 2048, 00:23:12.098 "data_size": 63488 00:23:12.098 }, 00:23:12.098 { 00:23:12.098 "name": "BaseBdev2", 00:23:12.098 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:12.098 "is_configured": true, 00:23:12.098 "data_offset": 2048, 00:23:12.098 "data_size": 63488 00:23:12.098 }, 00:23:12.098 { 00:23:12.098 "name": "BaseBdev3", 00:23:12.098 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:12.098 "is_configured": true, 00:23:12.098 "data_offset": 2048, 00:23:12.098 "data_size": 63488 00:23:12.098 } 00:23:12.098 ] 00:23:12.098 }' 00:23:12.098 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.098 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.356 [2024-11-20 07:20:36.579456] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:12.356 "name": "Existed_Raid", 00:23:12.356 "aliases": [ 00:23:12.356 "942850aa-4da6-4715-9238-21d7ae286c7c" 00:23:12.356 ], 00:23:12.356 "product_name": "Raid Volume", 00:23:12.356 "block_size": 512, 00:23:12.356 "num_blocks": 190464, 00:23:12.356 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:12.356 "assigned_rate_limits": { 00:23:12.356 "rw_ios_per_sec": 0, 00:23:12.356 "rw_mbytes_per_sec": 0, 00:23:12.356 "r_mbytes_per_sec": 0, 00:23:12.356 "w_mbytes_per_sec": 0 00:23:12.356 }, 00:23:12.356 "claimed": false, 00:23:12.356 "zoned": false, 00:23:12.356 "supported_io_types": { 00:23:12.356 "read": true, 00:23:12.356 "write": true, 00:23:12.356 "unmap": true, 00:23:12.356 "flush": true, 00:23:12.356 "reset": true, 00:23:12.356 "nvme_admin": false, 00:23:12.356 "nvme_io": false, 00:23:12.356 "nvme_io_md": false, 00:23:12.356 "write_zeroes": true, 00:23:12.356 "zcopy": false, 00:23:12.356 "get_zone_info": false, 00:23:12.356 "zone_management": false, 00:23:12.356 "zone_append": false, 00:23:12.356 "compare": false, 00:23:12.356 "compare_and_write": false, 00:23:12.356 "abort": false, 00:23:12.356 "seek_hole": false, 00:23:12.356 "seek_data": false, 00:23:12.356 "copy": false, 00:23:12.356 "nvme_iov_md": false 00:23:12.356 }, 00:23:12.356 "memory_domains": [ 00:23:12.356 { 00:23:12.356 "dma_device_id": "system", 00:23:12.356 "dma_device_type": 1 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.356 "dma_device_type": 2 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "dma_device_id": "system", 00:23:12.356 "dma_device_type": 1 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.356 
"dma_device_type": 2 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "dma_device_id": "system", 00:23:12.356 "dma_device_type": 1 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.356 "dma_device_type": 2 00:23:12.356 } 00:23:12.356 ], 00:23:12.356 "driver_specific": { 00:23:12.356 "raid": { 00:23:12.356 "uuid": "942850aa-4da6-4715-9238-21d7ae286c7c", 00:23:12.356 "strip_size_kb": 64, 00:23:12.356 "state": "online", 00:23:12.356 "raid_level": "concat", 00:23:12.356 "superblock": true, 00:23:12.356 "num_base_bdevs": 3, 00:23:12.356 "num_base_bdevs_discovered": 3, 00:23:12.356 "num_base_bdevs_operational": 3, 00:23:12.356 "base_bdevs_list": [ 00:23:12.356 { 00:23:12.356 "name": "NewBaseBdev", 00:23:12.356 "uuid": "a1be8ba6-2530-4f22-ab7f-722f0451f084", 00:23:12.356 "is_configured": true, 00:23:12.356 "data_offset": 2048, 00:23:12.356 "data_size": 63488 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "name": "BaseBdev2", 00:23:12.356 "uuid": "76d19b30-f157-4003-95bd-05053e858bbc", 00:23:12.356 "is_configured": true, 00:23:12.356 "data_offset": 2048, 00:23:12.356 "data_size": 63488 00:23:12.356 }, 00:23:12.356 { 00:23:12.356 "name": "BaseBdev3", 00:23:12.356 "uuid": "e8c2830d-520a-46fa-b7ab-587cb9d66ab5", 00:23:12.356 "is_configured": true, 00:23:12.356 "data_offset": 2048, 00:23:12.356 "data_size": 63488 00:23:12.356 } 00:23:12.356 ] 00:23:12.356 } 00:23:12.356 } 00:23:12.356 }' 00:23:12.356 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:12.615 BaseBdev2 00:23:12.615 BaseBdev3' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:12.615 
07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.615 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.873 [2024-11-20 07:20:36.911163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:12.873 [2024-11-20 07:20:36.911313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.873 [2024-11-20 07:20:36.911428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.873 [2024-11-20 07:20:36.911504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:12.873 [2024-11-20 07:20:36.911524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:12.873 07:20:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66473 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66473 ']' 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66473 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66473 00:23:12.873 killing process with pid 66473 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66473' 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66473 00:23:12.873 [2024-11-20 07:20:36.950983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.873 07:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66473 00:23:13.132 [2024-11-20 07:20:37.216779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.068 07:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:14.068 00:23:14.068 real 0m11.827s 00:23:14.068 user 0m19.745s 00:23:14.068 sys 0m1.552s 00:23:14.068 07:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.068 ************************************ 00:23:14.068 
END TEST raid_state_function_test_sb 00:23:14.068 07:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 ************************************ 00:23:14.068 07:20:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:23:14.068 07:20:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:14.068 07:20:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.068 07:20:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.068 ************************************ 00:23:14.068 START TEST raid_superblock_test 00:23:14.068 ************************************ 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67111 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67111 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67111 ']' 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.068 07:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.326 [2024-11-20 07:20:38.397291] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:23:14.326 [2024-11-20 07:20:38.397730] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67111 ] 00:23:14.326 [2024-11-20 07:20:38.581706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.585 [2024-11-20 07:20:38.711383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.845 [2024-11-20 07:20:38.916128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.845 [2024-11-20 07:20:38.916307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:15.411 
07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 malloc1
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 [2024-11-20 07:20:39.443020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:15.411 [2024-11-20 07:20:39.443254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:15.411 [2024-11-20 07:20:39.443347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:23:15.411 [2024-11-20 07:20:39.443480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:15.411 [2024-11-20 07:20:39.446484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:15.411 [2024-11-20 07:20:39.446668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:15.411 pt1
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 malloc2
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 [2024-11-20 07:20:39.499886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:15.411 [2024-11-20 07:20:39.499996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:15.411 [2024-11-20 07:20:39.500031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:23:15.411 [2024-11-20 07:20:39.500046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:15.411 [2024-11-20 07:20:39.503312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:15.411 [2024-11-20 07:20:39.503360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:15.411 pt2
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 malloc3
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 [2024-11-20 07:20:39.568017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:15.411 [2024-11-20 07:20:39.568239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:15.411 [2024-11-20 07:20:39.568320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:23:15.411 [2024-11-20 07:20:39.568430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:15.411 [2024-11-20 07:20:39.571243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:15.411 [2024-11-20 07:20:39.571399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:15.411 pt3
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.411 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.411 [2024-11-20 07:20:39.580171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:15.411 [2024-11-20 07:20:39.582606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:15.411 [2024-11-20 07:20:39.582701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:15.412 [2024-11-20 07:20:39.582945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:23:15.412 [2024-11-20 07:20:39.582969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:15.412 [2024-11-20 07:20:39.583284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:23:15.412 [2024-11-20 07:20:39.583522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:23:15.412 [2024-11-20 07:20:39.583540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:23:15.412 [2024-11-20 07:20:39.583751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:15.412 "name": "raid_bdev1",
00:23:15.412 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898",
00:23:15.412 "strip_size_kb": 64,
00:23:15.412 "state": "online",
00:23:15.412 "raid_level": "concat",
00:23:15.412 "superblock": true,
00:23:15.412 "num_base_bdevs": 3,
00:23:15.412 "num_base_bdevs_discovered": 3,
00:23:15.412 "num_base_bdevs_operational": 3,
00:23:15.412 "base_bdevs_list": [
00:23:15.412 {
00:23:15.412 "name": "pt1",
00:23:15.412 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:15.412 "is_configured": true,
00:23:15.412 "data_offset": 2048,
00:23:15.412 "data_size": 63488
00:23:15.412 },
00:23:15.412 {
00:23:15.412 "name": "pt2",
00:23:15.412 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:15.412 "is_configured": true,
00:23:15.412 "data_offset": 2048,
00:23:15.412 "data_size": 63488
00:23:15.412 },
00:23:15.412 {
00:23:15.412 "name": "pt3",
00:23:15.412 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:15.412 "is_configured": true,
00:23:15.412 "data_offset": 2048,
00:23:15.412 "data_size": 63488
00:23:15.412 }
00:23:15.412 ]
00:23:15.412 }'
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:15.412 07:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.978 [2024-11-20 07:20:40.092660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.978 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:15.978 "name": "raid_bdev1",
00:23:15.978 "aliases": [
00:23:15.978 "ea3a4bc7-0a55-412b-bf75-445c16bc7898"
00:23:15.978 ],
00:23:15.978 "product_name": "Raid Volume",
00:23:15.978 "block_size": 512,
00:23:15.978 "num_blocks": 190464,
00:23:15.978 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898",
00:23:15.978 "assigned_rate_limits": {
00:23:15.978 "rw_ios_per_sec": 0,
00:23:15.978 "rw_mbytes_per_sec": 0,
00:23:15.978 "r_mbytes_per_sec": 0,
00:23:15.978 "w_mbytes_per_sec": 0
00:23:15.978 },
00:23:15.978 "claimed": false,
00:23:15.978 "zoned": false,
00:23:15.978 "supported_io_types": {
00:23:15.978 "read": true,
00:23:15.978 "write": true,
00:23:15.978 "unmap": true,
00:23:15.978 "flush": true,
00:23:15.978 "reset": true,
00:23:15.978 "nvme_admin": false,
00:23:15.978 "nvme_io": false,
00:23:15.978 "nvme_io_md": false,
00:23:15.978 "write_zeroes": true,
00:23:15.978 "zcopy": false,
00:23:15.978 "get_zone_info": false,
00:23:15.978 "zone_management": false,
00:23:15.978 "zone_append": false,
00:23:15.978 "compare": false,
00:23:15.978 "compare_and_write": false,
00:23:15.978 "abort": false,
00:23:15.978 "seek_hole": false,
00:23:15.978 "seek_data": false,
00:23:15.978 "copy": false,
00:23:15.978 "nvme_iov_md": false
00:23:15.978 },
00:23:15.978 "memory_domains": [
00:23:15.978 {
00:23:15.978 "dma_device_id": "system",
00:23:15.978 "dma_device_type": 1
00:23:15.978 },
00:23:15.978 {
00:23:15.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:15.978 "dma_device_type": 2
00:23:15.978 },
00:23:15.978 {
00:23:15.978 "dma_device_id": "system",
00:23:15.979 "dma_device_type": 1
00:23:15.979 },
00:23:15.979 {
00:23:15.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:15.979 "dma_device_type": 2
00:23:15.979 },
00:23:15.979 {
00:23:15.979 "dma_device_id": "system",
00:23:15.979 "dma_device_type": 1
00:23:15.979 },
00:23:15.979 {
00:23:15.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:15.979 "dma_device_type": 2
00:23:15.979 }
00:23:15.979 ],
00:23:15.979 "driver_specific": {
00:23:15.979 "raid": {
00:23:15.979 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898",
00:23:15.979 "strip_size_kb": 64,
00:23:15.979 "state": "online",
00:23:15.979 "raid_level": "concat",
00:23:15.979 "superblock": true,
00:23:15.979 "num_base_bdevs": 3,
00:23:15.979 "num_base_bdevs_discovered": 3,
00:23:15.979 "num_base_bdevs_operational": 3,
00:23:15.979 "base_bdevs_list": [
00:23:15.979 {
00:23:15.979 "name": "pt1",
00:23:15.979 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:15.979 "is_configured": true,
00:23:15.979 "data_offset": 2048,
00:23:15.979 "data_size": 63488
00:23:15.979 },
00:23:15.979 {
00:23:15.979 "name": "pt2",
00:23:15.979 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:15.979 "is_configured": true,
00:23:15.979 "data_offset": 2048,
00:23:15.979 "data_size": 63488
00:23:15.979 },
00:23:15.979 {
00:23:15.979 "name": "pt3",
00:23:15.979 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:15.979 "is_configured": true,
00:23:15.979 "data_offset": 2048,
00:23:15.979 "data_size": 63488
00:23:15.979 }
00:23:15.979 ]
00:23:15.979 }
00:23:15.979 }
00:23:15.979 }'
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:15.979 pt2
00:23:15.979 pt3'
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:15.979 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.238 [2024-11-20 07:20:40.428717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ea3a4bc7-0a55-412b-bf75-445c16bc7898
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ea3a4bc7-0a55-412b-bf75-445c16bc7898 ']'
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.238 [2024-11-20 07:20:40.472341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:16.238 [2024-11-20 07:20:40.472376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:16.238 [2024-11-20 07:20:40.472477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:16.238 [2024-11-20 07:20:40.472561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:16.238 [2024-11-20 07:20:40.472578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:23:16.238 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.498 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.498 [2024-11-20 07:20:40.620436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:16.498 [2024-11-20 07:20:40.622975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:16.498 [2024-11-20 07:20:40.623050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:23:16.498 [2024-11-20 07:20:40.623122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:23:16.498 [2024-11-20 07:20:40.623208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:23:16.498 [2024-11-20 07:20:40.623242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:23:16.498 [2024-11-20 07:20:40.623270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:16.498 [2024-11-20 07:20:40.623284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:23:16.498 request:
00:23:16.498 {
00:23:16.499 "name": "raid_bdev1",
00:23:16.499 "raid_level": "concat",
00:23:16.499 "base_bdevs": [
00:23:16.499 "malloc1",
00:23:16.499 "malloc2",
00:23:16.499 "malloc3"
00:23:16.499 ],
00:23:16.499 "strip_size_kb": 64,
00:23:16.499 "superblock": false,
00:23:16.499 "method": "bdev_raid_create",
00:23:16.499 "req_id": 1
00:23:16.499 }
00:23:16.499 Got JSON-RPC error response
00:23:16.499 response:
00:23:16.499 {
00:23:16.499 "code": -17,
00:23:16.499 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:16.499 }
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.499 [2024-11-20 07:20:40.684372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:16.499 [2024-11-20 07:20:40.684437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:16.499 [2024-11-20 07:20:40.684468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:23:16.499 [2024-11-20 07:20:40.684484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:16.499 [2024-11-20 07:20:40.687312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:16.499 [2024-11-20 07:20:40.687359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:16.499 [2024-11-20 07:20:40.687466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:23:16.499 [2024-11-20 07:20:40.687532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:16.499 pt1
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:16.499 "name": "raid_bdev1",
00:23:16.499 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898",
00:23:16.499 "strip_size_kb": 64,
00:23:16.499 "state": "configuring",
00:23:16.499 "raid_level": "concat",
00:23:16.499 "superblock": true,
00:23:16.499 "num_base_bdevs": 3,
00:23:16.499 "num_base_bdevs_discovered": 1,
00:23:16.499 "num_base_bdevs_operational": 3,
00:23:16.499 "base_bdevs_list": [
00:23:16.499 {
00:23:16.499 "name": "pt1",
00:23:16.499 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:16.499 "is_configured": true,
00:23:16.499 "data_offset": 2048,
00:23:16.499 "data_size": 63488
00:23:16.499 },
00:23:16.499 {
00:23:16.499 "name": null,
00:23:16.499 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:16.499 "is_configured": false,
00:23:16.499 "data_offset": 2048,
00:23:16.499 "data_size": 63488
00:23:16.499 },
00:23:16.499 {
00:23:16.499 "name": null,
00:23:16.499 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:16.499 "is_configured": false,
00:23:16.499 "data_offset": 2048,
00:23:16.499 "data_size": 63488
00:23:16.499 }
00:23:16.499 ]
00:23:16.499 }'
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:16.499 07:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.064 [2024-11-20 07:20:41.224573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:17.064 [2024-11-20 07:20:41.224807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:17.064 [2024-11-20 07:20:41.224961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:23:17.064 [2024-11-20 07:20:41.225099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:17.064 [2024-11-20 07:20:41.225732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:17.064 [2024-11-20 07:20:41.225893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:17.064 [2024-11-20 07:20:41.226154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:17.064 [2024-11-20 07:20:41.226196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:17.064 pt2
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.064 [2024-11-20 07:20:41.232556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.064 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:17.064 "name": "raid_bdev1",
00:23:17.064 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898",
00:23:17.064 "strip_size_kb": 64,
00:23:17.064 "state": "configuring",
00:23:17.064 "raid_level": "concat",
00:23:17.064 "superblock": true,
00:23:17.064 "num_base_bdevs": 3,
00:23:17.064 "num_base_bdevs_discovered": 1,
00:23:17.064 "num_base_bdevs_operational": 3,
00:23:17.064 "base_bdevs_list": [
00:23:17.064 {
00:23:17.064 "name": "pt1",
00:23:17.064 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:17.064 "is_configured": true,
00:23:17.064 "data_offset": 2048,
00:23:17.064 "data_size": 63488
00:23:17.064 },
00:23:17.064 {
00:23:17.064 "name": null,
00:23:17.064 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:17.064 "is_configured": false,
00:23:17.064 "data_offset": 0,
00:23:17.064 "data_size": 63488
00:23:17.064 },
00:23:17.064 {
00:23:17.064 "name": null,
00:23:17.064 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:17.064 "is_configured": false,
00:23:17.065 "data_offset": 2048,
00:23:17.065 "data_size": 63488
00:23:17.065 }
00:23:17.065 ]
00:23:17.065 }'
00:23:17.065 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:17.065 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:17.631 [2024-11-20 07:20:41.764690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:17.631 [2024-11-20 07:20:41.764776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:17.631 [2024-11-20 07:20:41.764805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:23:17.631 [2024-11-20 07:20:41.764824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:17.631 [2024-11-20 07:20:41.765421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:17.631 [2024-11-20 07:20:41.765463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:17.631 [2024-11-20 07:20:41.765565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:17.631 [2024-11-20 07:20:41.765618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:17.631 pt2
00:23:17.631 07:20:41 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.631 [2024-11-20 07:20:41.772658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:17.631 [2024-11-20 07:20:41.772713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.631 [2024-11-20 07:20:41.772734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:17.631 [2024-11-20 07:20:41.772752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.631 [2024-11-20 07:20:41.773193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.631 [2024-11-20 07:20:41.773237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:17.631 [2024-11-20 07:20:41.773313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:17.631 [2024-11-20 07:20:41.773345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:17.631 [2024-11-20 07:20:41.773503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:17.631 [2024-11-20 07:20:41.773524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:17.631 [2024-11-20 07:20:41.773852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:17.631 [2024-11-20 
07:20:41.774053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:17.631 [2024-11-20 07:20:41.774069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:17.631 [2024-11-20 07:20:41.774229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.631 pt3 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.631 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.631 "name": "raid_bdev1", 00:23:17.631 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898", 00:23:17.631 "strip_size_kb": 64, 00:23:17.631 "state": "online", 00:23:17.631 "raid_level": "concat", 00:23:17.631 "superblock": true, 00:23:17.631 "num_base_bdevs": 3, 00:23:17.631 "num_base_bdevs_discovered": 3, 00:23:17.632 "num_base_bdevs_operational": 3, 00:23:17.632 "base_bdevs_list": [ 00:23:17.632 { 00:23:17.632 "name": "pt1", 00:23:17.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:17.632 "is_configured": true, 00:23:17.632 "data_offset": 2048, 00:23:17.632 "data_size": 63488 00:23:17.632 }, 00:23:17.632 { 00:23:17.632 "name": "pt2", 00:23:17.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.632 "is_configured": true, 00:23:17.632 "data_offset": 2048, 00:23:17.632 "data_size": 63488 00:23:17.632 }, 00:23:17.632 { 00:23:17.632 "name": "pt3", 00:23:17.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:17.632 "is_configured": true, 00:23:17.632 "data_offset": 2048, 00:23:17.632 "data_size": 63488 00:23:17.632 } 00:23:17.632 ] 00:23:17.632 }' 00:23:17.632 07:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.632 07:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:18.199 
07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.199 [2024-11-20 07:20:42.293187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.199 "name": "raid_bdev1", 00:23:18.199 "aliases": [ 00:23:18.199 "ea3a4bc7-0a55-412b-bf75-445c16bc7898" 00:23:18.199 ], 00:23:18.199 "product_name": "Raid Volume", 00:23:18.199 "block_size": 512, 00:23:18.199 "num_blocks": 190464, 00:23:18.199 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898", 00:23:18.199 "assigned_rate_limits": { 00:23:18.199 "rw_ios_per_sec": 0, 00:23:18.199 "rw_mbytes_per_sec": 0, 00:23:18.199 "r_mbytes_per_sec": 0, 00:23:18.199 "w_mbytes_per_sec": 0 00:23:18.199 }, 00:23:18.199 "claimed": false, 00:23:18.199 "zoned": false, 00:23:18.199 "supported_io_types": { 00:23:18.199 "read": true, 00:23:18.199 "write": true, 00:23:18.199 "unmap": true, 00:23:18.199 "flush": true, 00:23:18.199 "reset": true, 00:23:18.199 "nvme_admin": false, 00:23:18.199 "nvme_io": false, 00:23:18.199 "nvme_io_md": false, 00:23:18.199 
"write_zeroes": true, 00:23:18.199 "zcopy": false, 00:23:18.199 "get_zone_info": false, 00:23:18.199 "zone_management": false, 00:23:18.199 "zone_append": false, 00:23:18.199 "compare": false, 00:23:18.199 "compare_and_write": false, 00:23:18.199 "abort": false, 00:23:18.199 "seek_hole": false, 00:23:18.199 "seek_data": false, 00:23:18.199 "copy": false, 00:23:18.199 "nvme_iov_md": false 00:23:18.199 }, 00:23:18.199 "memory_domains": [ 00:23:18.199 { 00:23:18.199 "dma_device_id": "system", 00:23:18.199 "dma_device_type": 1 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.199 "dma_device_type": 2 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "system", 00:23:18.199 "dma_device_type": 1 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.199 "dma_device_type": 2 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "system", 00:23:18.199 "dma_device_type": 1 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.199 "dma_device_type": 2 00:23:18.199 } 00:23:18.199 ], 00:23:18.199 "driver_specific": { 00:23:18.199 "raid": { 00:23:18.199 "uuid": "ea3a4bc7-0a55-412b-bf75-445c16bc7898", 00:23:18.199 "strip_size_kb": 64, 00:23:18.199 "state": "online", 00:23:18.199 "raid_level": "concat", 00:23:18.199 "superblock": true, 00:23:18.199 "num_base_bdevs": 3, 00:23:18.199 "num_base_bdevs_discovered": 3, 00:23:18.199 "num_base_bdevs_operational": 3, 00:23:18.199 "base_bdevs_list": [ 00:23:18.199 { 00:23:18.199 "name": "pt1", 00:23:18.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:18.199 "is_configured": true, 00:23:18.199 "data_offset": 2048, 00:23:18.199 "data_size": 63488 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "name": "pt2", 00:23:18.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.199 "is_configured": true, 00:23:18.199 "data_offset": 2048, 00:23:18.199 "data_size": 63488 00:23:18.199 }, 00:23:18.199 
{ 00:23:18.199 "name": "pt3", 00:23:18.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:18.199 "is_configured": true, 00:23:18.199 "data_offset": 2048, 00:23:18.199 "data_size": 63488 00:23:18.199 } 00:23:18.199 ] 00:23:18.199 } 00:23:18.199 } 00:23:18.199 }' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:18.199 pt2 00:23:18.199 pt3' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:18.199 07:20:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.199 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:18.458 
[2024-11-20 07:20:42.589222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ea3a4bc7-0a55-412b-bf75-445c16bc7898 '!=' ea3a4bc7-0a55-412b-bf75-445c16bc7898 ']' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67111 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67111 ']' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67111 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67111 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.458 killing process with pid 67111 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67111' 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67111 00:23:18.458 07:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67111 00:23:18.458 [2024-11-20 07:20:42.675143] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:18.458 [2024-11-20 07:20:42.675256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.458 [2024-11-20 07:20:42.675337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.458 [2024-11-20 07:20:42.675356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:18.716 [2024-11-20 07:20:42.942917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:20.092 07:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:20.092 00:23:20.092 real 0m5.686s 00:23:20.092 user 0m8.560s 00:23:20.092 sys 0m0.873s 00:23:20.092 07:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.092 07:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.092 ************************************ 00:23:20.092 END TEST raid_superblock_test 00:23:20.092 ************************************ 00:23:20.092 07:20:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:23:20.092 07:20:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:20.092 07:20:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.092 07:20:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.092 ************************************ 00:23:20.092 START TEST raid_read_error_test 00:23:20.092 ************************************ 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:23:20.092 07:20:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aE8JgSID1b 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67364 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67364 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67364 ']' 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.092 07:20:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.092 [2024-11-20 07:20:44.153738] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:23:20.092 [2024-11-20 07:20:44.153918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67364 ] 00:23:20.092 [2024-11-20 07:20:44.342733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.352 [2024-11-20 07:20:44.500298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.611 [2024-11-20 07:20:44.712559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.611 [2024-11-20 07:20:44.712643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.868 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.868 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:20.868 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:20.869 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:20.869 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.869 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 BaseBdev1_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 true 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 [2024-11-20 07:20:45.189127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:21.129 [2024-11-20 07:20:45.189199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.129 [2024-11-20 07:20:45.189229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:21.129 [2024-11-20 07:20:45.189248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.129 [2024-11-20 07:20:45.192104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.129 [2024-11-20 07:20:45.192155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:21.129 BaseBdev1 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 BaseBdev2_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 true 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 [2024-11-20 07:20:45.255733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:21.129 [2024-11-20 07:20:45.255952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.129 [2024-11-20 07:20:45.256023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:21.129 [2024-11-20 07:20:45.256047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.129 [2024-11-20 07:20:45.260273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.129 [2024-11-20 07:20:45.260431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:21.129 BaseBdev2 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 BaseBdev3_malloc 00:23:21.129 07:20:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 true 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 [2024-11-20 07:20:45.343617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:21.129 [2024-11-20 07:20:45.343812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.129 [2024-11-20 07:20:45.343870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:21.129 [2024-11-20 07:20:45.343893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.129 [2024-11-20 07:20:45.348103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.129 [2024-11-20 07:20:45.348261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:21.129 BaseBdev3 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 [2024-11-20 07:20:45.356874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.129 [2024-11-20 07:20:45.360420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.129 [2024-11-20 07:20:45.360667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.129 [2024-11-20 07:20:45.361200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:21.129 [2024-11-20 07:20:45.361237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:21.129 [2024-11-20 07:20:45.361796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:23:21.129 [2024-11-20 07:20:45.362133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:21.129 [2024-11-20 07:20:45.362171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:21.129 [2024-11-20 07:20:45.362502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.129 07:20:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.129 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.389 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.389 "name": "raid_bdev1", 00:23:21.389 "uuid": "6b96220d-fa5f-4dde-aa72-54a989fa1a67", 00:23:21.389 "strip_size_kb": 64, 00:23:21.389 "state": "online", 00:23:21.389 "raid_level": "concat", 00:23:21.389 "superblock": true, 00:23:21.389 "num_base_bdevs": 3, 00:23:21.389 "num_base_bdevs_discovered": 3, 00:23:21.389 "num_base_bdevs_operational": 3, 00:23:21.389 "base_bdevs_list": [ 00:23:21.389 { 00:23:21.389 "name": "BaseBdev1", 00:23:21.389 "uuid": "b77bdd3f-8a61-581c-ad9a-f4289a16ee0e", 00:23:21.389 "is_configured": true, 00:23:21.389 "data_offset": 2048, 00:23:21.389 "data_size": 63488 00:23:21.389 }, 00:23:21.389 { 00:23:21.389 "name": "BaseBdev2", 00:23:21.389 "uuid": "222fbcc6-12f9-51b1-bcb5-52436991cb21", 00:23:21.389 "is_configured": true, 00:23:21.389 "data_offset": 2048, 00:23:21.389 "data_size": 63488 
00:23:21.389 }, 00:23:21.389 { 00:23:21.389 "name": "BaseBdev3", 00:23:21.389 "uuid": "ae0cacd3-6fa1-59a6-bd47-1a98a3bc727e", 00:23:21.389 "is_configured": true, 00:23:21.389 "data_offset": 2048, 00:23:21.389 "data_size": 63488 00:23:21.389 } 00:23:21.389 ] 00:23:21.389 }' 00:23:21.389 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.389 07:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.955 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:21.955 07:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:21.955 [2024-11-20 07:20:46.090555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:23:22.891 07:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.891 "name": "raid_bdev1", 00:23:22.891 "uuid": "6b96220d-fa5f-4dde-aa72-54a989fa1a67", 00:23:22.891 "strip_size_kb": 64, 00:23:22.891 "state": "online", 00:23:22.891 "raid_level": "concat", 00:23:22.891 "superblock": true, 00:23:22.891 "num_base_bdevs": 3, 00:23:22.891 "num_base_bdevs_discovered": 3, 00:23:22.891 "num_base_bdevs_operational": 3, 00:23:22.891 "base_bdevs_list": [ 00:23:22.891 { 00:23:22.891 "name": "BaseBdev1", 00:23:22.891 "uuid": "b77bdd3f-8a61-581c-ad9a-f4289a16ee0e", 00:23:22.891 "is_configured": true, 00:23:22.891 "data_offset": 2048, 00:23:22.891 "data_size": 63488 
00:23:22.891 }, 00:23:22.891 { 00:23:22.891 "name": "BaseBdev2", 00:23:22.891 "uuid": "222fbcc6-12f9-51b1-bcb5-52436991cb21", 00:23:22.891 "is_configured": true, 00:23:22.891 "data_offset": 2048, 00:23:22.891 "data_size": 63488 00:23:22.891 }, 00:23:22.891 { 00:23:22.891 "name": "BaseBdev3", 00:23:22.891 "uuid": "ae0cacd3-6fa1-59a6-bd47-1a98a3bc727e", 00:23:22.891 "is_configured": true, 00:23:22.891 "data_offset": 2048, 00:23:22.891 "data_size": 63488 00:23:22.891 } 00:23:22.891 ] 00:23:22.891 }' 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.891 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.459 [2024-11-20 07:20:47.516791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.459 [2024-11-20 07:20:47.516832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.459 [2024-11-20 07:20:47.520188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.459 [2024-11-20 07:20:47.520255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.459 [2024-11-20 07:20:47.520320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.459 [2024-11-20 07:20:47.520337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:23.459 { 00:23:23.459 "results": [ 00:23:23.459 { 00:23:23.459 "job": "raid_bdev1", 00:23:23.459 "core_mask": "0x1", 00:23:23.459 "workload": "randrw", 00:23:23.459 "percentage": 50, 
00:23:23.459 "status": "finished", 00:23:23.459 "queue_depth": 1, 00:23:23.459 "io_size": 131072, 00:23:23.459 "runtime": 1.423808, 00:23:23.459 "iops": 10844.861104868072, 00:23:23.459 "mibps": 1355.607638108509, 00:23:23.459 "io_failed": 1, 00:23:23.459 "io_timeout": 0, 00:23:23.459 "avg_latency_us": 128.74576868281312, 00:23:23.459 "min_latency_us": 43.054545454545455, 00:23:23.459 "max_latency_us": 1809.6872727272728 00:23:23.459 } 00:23:23.459 ], 00:23:23.459 "core_count": 1 00:23:23.459 } 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67364 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67364 ']' 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67364 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67364 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.459 killing process with pid 67364 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67364' 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67364 00:23:23.459 [2024-11-20 07:20:47.556649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.459 07:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67364 00:23:23.717 [2024-11-20 
07:20:47.761513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aE8JgSID1b 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:24.651 07:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:23:24.651 00:23:24.652 real 0m4.832s 00:23:24.652 user 0m6.035s 00:23:24.652 sys 0m0.596s 00:23:24.652 07:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.652 07:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.652 ************************************ 00:23:24.652 END TEST raid_read_error_test 00:23:24.652 ************************************ 00:23:24.652 07:20:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:23:24.652 07:20:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:24.652 07:20:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.652 07:20:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.652 ************************************ 00:23:24.652 START TEST raid_write_error_test 00:23:24.652 ************************************ 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:23:24.652 07:20:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:24.652 07:20:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7nIZNeIwBg 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67515 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67515 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67515 ']' 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.652 07:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.910 [2024-11-20 07:20:49.006739] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:24.910 [2024-11-20 07:20:49.007347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67515 ] 00:23:24.910 [2024-11-20 07:20:49.183618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.167 [2024-11-20 07:20:49.329541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.424 [2024-11-20 07:20:49.534926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.424 [2024-11-20 07:20:49.534996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.014 BaseBdev1_malloc 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.014 true 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.014 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.014 [2024-11-20 07:20:50.130610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:26.014 [2024-11-20 07:20:50.130679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.014 [2024-11-20 07:20:50.130710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:26.014 [2024-11-20 07:20:50.130729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.014 [2024-11-20 07:20:50.133462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.014 [2024-11-20 07:20:50.133514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:26.014 BaseBdev1 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.015 BaseBdev2_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 true 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 [2024-11-20 07:20:50.190294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:26.015 [2024-11-20 07:20:50.190361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.015 [2024-11-20 07:20:50.190387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:26.015 [2024-11-20 07:20:50.190405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.015 [2024-11-20 07:20:50.193139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.015 [2024-11-20 07:20:50.193188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.015 BaseBdev2 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:26.015 07:20:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 BaseBdev3_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 true 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 [2024-11-20 07:20:50.265216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:26.015 [2024-11-20 07:20:50.265297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.015 [2024-11-20 07:20:50.265329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:26.015 [2024-11-20 07:20:50.265352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.015 [2024-11-20 07:20:50.268913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.015 [2024-11-20 07:20:50.268973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:23:26.015 BaseBdev3 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.015 [2024-11-20 07:20:50.273402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.015 [2024-11-20 07:20:50.276465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.015 [2024-11-20 07:20:50.276642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:26.015 [2024-11-20 07:20:50.277044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:26.015 [2024-11-20 07:20:50.277080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:26.015 [2024-11-20 07:20:50.277524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:23:26.015 [2024-11-20 07:20:50.277876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:26.015 [2024-11-20 07:20:50.277923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:26.015 [2024-11-20 07:20:50.278265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.015 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.312 "name": "raid_bdev1", 00:23:26.312 "uuid": "45cfae71-eefe-419e-bb18-7d8c21e2c3aa", 00:23:26.312 "strip_size_kb": 64, 00:23:26.312 "state": "online", 00:23:26.312 "raid_level": "concat", 00:23:26.312 "superblock": true, 00:23:26.312 "num_base_bdevs": 3, 00:23:26.312 "num_base_bdevs_discovered": 3, 00:23:26.312 "num_base_bdevs_operational": 3, 00:23:26.312 "base_bdevs_list": [ 00:23:26.312 { 00:23:26.312 
"name": "BaseBdev1", 00:23:26.312 "uuid": "2ac0d22a-7a25-5e42-a9e3-1097b0b9bd79", 00:23:26.312 "is_configured": true, 00:23:26.312 "data_offset": 2048, 00:23:26.312 "data_size": 63488 00:23:26.312 }, 00:23:26.312 { 00:23:26.312 "name": "BaseBdev2", 00:23:26.312 "uuid": "2f0885f6-175f-5e4d-962e-462cdc9d89fe", 00:23:26.312 "is_configured": true, 00:23:26.312 "data_offset": 2048, 00:23:26.312 "data_size": 63488 00:23:26.312 }, 00:23:26.312 { 00:23:26.312 "name": "BaseBdev3", 00:23:26.312 "uuid": "502d86c0-fcc1-5a9a-a920-9342362ed158", 00:23:26.312 "is_configured": true, 00:23:26.312 "data_offset": 2048, 00:23:26.312 "data_size": 63488 00:23:26.312 } 00:23:26.312 ] 00:23:26.312 }' 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.312 07:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.570 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:26.570 07:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:26.829 [2024-11-20 07:20:50.919768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:23:27.765 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:27.765 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.766 "name": "raid_bdev1", 00:23:27.766 "uuid": "45cfae71-eefe-419e-bb18-7d8c21e2c3aa", 00:23:27.766 "strip_size_kb": 64, 00:23:27.766 "state": "online", 
00:23:27.766 "raid_level": "concat", 00:23:27.766 "superblock": true, 00:23:27.766 "num_base_bdevs": 3, 00:23:27.766 "num_base_bdevs_discovered": 3, 00:23:27.766 "num_base_bdevs_operational": 3, 00:23:27.766 "base_bdevs_list": [ 00:23:27.766 { 00:23:27.766 "name": "BaseBdev1", 00:23:27.766 "uuid": "2ac0d22a-7a25-5e42-a9e3-1097b0b9bd79", 00:23:27.766 "is_configured": true, 00:23:27.766 "data_offset": 2048, 00:23:27.766 "data_size": 63488 00:23:27.766 }, 00:23:27.766 { 00:23:27.766 "name": "BaseBdev2", 00:23:27.766 "uuid": "2f0885f6-175f-5e4d-962e-462cdc9d89fe", 00:23:27.766 "is_configured": true, 00:23:27.766 "data_offset": 2048, 00:23:27.766 "data_size": 63488 00:23:27.766 }, 00:23:27.766 { 00:23:27.766 "name": "BaseBdev3", 00:23:27.766 "uuid": "502d86c0-fcc1-5a9a-a920-9342362ed158", 00:23:27.766 "is_configured": true, 00:23:27.766 "data_offset": 2048, 00:23:27.766 "data_size": 63488 00:23:27.766 } 00:23:27.766 ] 00:23:27.766 }' 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.766 07:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 [2024-11-20 07:20:52.343448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.334 [2024-11-20 07:20:52.343488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:28.334 [2024-11-20 07:20:52.346833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.334 [2024-11-20 07:20:52.346896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.334 [2024-11-20 07:20:52.346954] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.334 [2024-11-20 07:20:52.346973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:28.334 { 00:23:28.334 "results": [ 00:23:28.334 { 00:23:28.334 "job": "raid_bdev1", 00:23:28.334 "core_mask": "0x1", 00:23:28.334 "workload": "randrw", 00:23:28.334 "percentage": 50, 00:23:28.334 "status": "finished", 00:23:28.334 "queue_depth": 1, 00:23:28.334 "io_size": 131072, 00:23:28.334 "runtime": 1.421251, 00:23:28.334 "iops": 10674.398821882975, 00:23:28.334 "mibps": 1334.2998527353718, 00:23:28.334 "io_failed": 1, 00:23:28.334 "io_timeout": 0, 00:23:28.334 "avg_latency_us": 130.83703472904634, 00:23:28.334 "min_latency_us": 39.56363636363636, 00:23:28.334 "max_latency_us": 1839.4763636363637 00:23:28.334 } 00:23:28.334 ], 00:23:28.334 "core_count": 1 00:23:28.334 } 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67515 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67515 ']' 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67515 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67515 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.334 killing process with pid 67515 00:23:28.334 
07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67515' 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67515 00:23:28.334 [2024-11-20 07:20:52.386963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:28.334 07:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67515 00:23:28.334 [2024-11-20 07:20:52.596116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7nIZNeIwBg 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:23:29.704 00:23:29.704 real 0m4.767s 00:23:29.704 user 0m5.980s 00:23:29.704 sys 0m0.582s 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.704 07:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 ************************************ 00:23:29.704 END TEST raid_write_error_test 00:23:29.704 ************************************ 00:23:29.704 07:20:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:29.704 07:20:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:23:29.704 07:20:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:29.704 07:20:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.704 07:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 ************************************ 00:23:29.704 START TEST raid_state_function_test 00:23:29.704 ************************************ 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67659 00:23:29.704 Process raid pid: 67659 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67659' 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67659 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67659 ']' 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.704 07:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.704 [2024-11-20 07:20:53.819310] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:29.704 [2024-11-20 07:20:53.819475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.962 [2024-11-20 07:20:53.999856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.962 [2024-11-20 07:20:54.130098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.220 [2024-11-20 07:20:54.336360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.220 [2024-11-20 07:20:54.336418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 [2024-11-20 07:20:54.854853] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:30.783 [2024-11-20 07:20:54.854917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:30.783 [2024-11-20 07:20:54.854934] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:30.783 [2024-11-20 07:20:54.854950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:30.783 [2024-11-20 07:20:54.854960] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:30.783 [2024-11-20 07:20:54.854975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.783 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.784 
07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.784 "name": "Existed_Raid", 00:23:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.784 "strip_size_kb": 0, 00:23:30.784 "state": "configuring", 00:23:30.784 "raid_level": "raid1", 00:23:30.784 "superblock": false, 00:23:30.784 "num_base_bdevs": 3, 00:23:30.784 "num_base_bdevs_discovered": 0, 00:23:30.784 "num_base_bdevs_operational": 3, 00:23:30.784 "base_bdevs_list": [ 00:23:30.784 { 00:23:30.784 "name": "BaseBdev1", 00:23:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.784 "is_configured": false, 00:23:30.784 "data_offset": 0, 00:23:30.784 "data_size": 0 00:23:30.784 }, 00:23:30.784 { 00:23:30.784 "name": "BaseBdev2", 00:23:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.784 "is_configured": false, 00:23:30.784 "data_offset": 0, 00:23:30.784 "data_size": 0 00:23:30.784 }, 00:23:30.784 { 00:23:30.784 "name": "BaseBdev3", 00:23:30.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.784 "is_configured": false, 00:23:30.784 "data_offset": 0, 00:23:30.784 "data_size": 0 00:23:30.784 } 00:23:30.784 ] 00:23:30.784 }' 00:23:30.784 07:20:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.784 07:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 [2024-11-20 07:20:55.374921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.349 [2024-11-20 07:20:55.374981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 [2024-11-20 07:20:55.382894] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:31.349 [2024-11-20 07:20:55.382948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:31.349 [2024-11-20 07:20:55.382963] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.349 [2024-11-20 07:20:55.382979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.349 [2024-11-20 07:20:55.382988] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.349 [2024-11-20 07:20:55.383003] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 [2024-11-20 07:20:55.431559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.349 BaseBdev1 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.349 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 [ 00:23:31.349 { 00:23:31.349 "name": "BaseBdev1", 00:23:31.349 "aliases": [ 00:23:31.349 "8df13a8c-6544-4fb8-8a9c-fe5a307611b8" 00:23:31.349 ], 00:23:31.349 "product_name": "Malloc disk", 00:23:31.349 "block_size": 512, 00:23:31.349 "num_blocks": 65536, 00:23:31.349 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:31.349 "assigned_rate_limits": { 00:23:31.349 "rw_ios_per_sec": 0, 00:23:31.349 "rw_mbytes_per_sec": 0, 00:23:31.349 "r_mbytes_per_sec": 0, 00:23:31.349 "w_mbytes_per_sec": 0 00:23:31.349 }, 00:23:31.349 "claimed": true, 00:23:31.349 "claim_type": "exclusive_write", 00:23:31.349 "zoned": false, 00:23:31.349 "supported_io_types": { 00:23:31.349 "read": true, 00:23:31.349 "write": true, 00:23:31.349 "unmap": true, 00:23:31.349 "flush": true, 00:23:31.349 "reset": true, 00:23:31.349 "nvme_admin": false, 00:23:31.350 "nvme_io": false, 00:23:31.350 "nvme_io_md": false, 00:23:31.350 "write_zeroes": true, 00:23:31.350 "zcopy": true, 00:23:31.350 "get_zone_info": false, 00:23:31.350 "zone_management": false, 00:23:31.350 "zone_append": false, 00:23:31.350 "compare": false, 00:23:31.350 "compare_and_write": false, 00:23:31.350 "abort": true, 00:23:31.350 "seek_hole": false, 00:23:31.350 "seek_data": false, 00:23:31.350 "copy": true, 00:23:31.350 "nvme_iov_md": false 00:23:31.350 }, 00:23:31.350 "memory_domains": [ 00:23:31.350 { 00:23:31.350 "dma_device_id": "system", 00:23:31.350 "dma_device_type": 1 00:23:31.350 }, 00:23:31.350 { 00:23:31.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.350 "dma_device_type": 2 00:23:31.350 } 00:23:31.350 ], 00:23:31.350 "driver_specific": {} 00:23:31.350 } 00:23:31.350 ] 00:23:31.350 07:20:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:23:31.350 "name": "Existed_Raid", 00:23:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.350 "strip_size_kb": 0, 00:23:31.350 "state": "configuring", 00:23:31.350 "raid_level": "raid1", 00:23:31.350 "superblock": false, 00:23:31.350 "num_base_bdevs": 3, 00:23:31.350 "num_base_bdevs_discovered": 1, 00:23:31.350 "num_base_bdevs_operational": 3, 00:23:31.350 "base_bdevs_list": [ 00:23:31.350 { 00:23:31.350 "name": "BaseBdev1", 00:23:31.350 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:31.350 "is_configured": true, 00:23:31.350 "data_offset": 0, 00:23:31.350 "data_size": 65536 00:23:31.350 }, 00:23:31.350 { 00:23:31.350 "name": "BaseBdev2", 00:23:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.350 "is_configured": false, 00:23:31.350 "data_offset": 0, 00:23:31.350 "data_size": 0 00:23:31.350 }, 00:23:31.350 { 00:23:31.350 "name": "BaseBdev3", 00:23:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.350 "is_configured": false, 00:23:31.350 "data_offset": 0, 00:23:31.350 "data_size": 0 00:23:31.350 } 00:23:31.350 ] 00:23:31.350 }' 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.350 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 [2024-11-20 07:20:55.971769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.917 [2024-11-20 07:20:55.971835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 [2024-11-20 07:20:55.979795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.917 [2024-11-20 07:20:55.982223] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.917 [2024-11-20 07:20:55.982283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.917 [2024-11-20 07:20:55.982300] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.917 [2024-11-20 07:20:55.982316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.917 07:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.917 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.917 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.917 "name": "Existed_Raid", 00:23:31.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.917 "strip_size_kb": 0, 00:23:31.917 "state": "configuring", 00:23:31.917 "raid_level": "raid1", 00:23:31.917 "superblock": false, 00:23:31.917 "num_base_bdevs": 3, 00:23:31.917 "num_base_bdevs_discovered": 1, 00:23:31.917 "num_base_bdevs_operational": 3, 00:23:31.917 "base_bdevs_list": [ 00:23:31.917 { 00:23:31.917 "name": "BaseBdev1", 00:23:31.917 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:31.917 "is_configured": true, 00:23:31.917 "data_offset": 0, 00:23:31.917 "data_size": 65536 00:23:31.917 }, 00:23:31.917 { 00:23:31.917 "name": "BaseBdev2", 00:23:31.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.917 
"is_configured": false, 00:23:31.917 "data_offset": 0, 00:23:31.917 "data_size": 0 00:23:31.917 }, 00:23:31.917 { 00:23:31.917 "name": "BaseBdev3", 00:23:31.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.917 "is_configured": false, 00:23:31.917 "data_offset": 0, 00:23:31.917 "data_size": 0 00:23:31.917 } 00:23:31.917 ] 00:23:31.917 }' 00:23:31.917 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.917 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.484 [2024-11-20 07:20:56.538264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:32.484 BaseBdev2 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:32.484 07:20:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.484 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.485 [ 00:23:32.485 { 00:23:32.485 "name": "BaseBdev2", 00:23:32.485 "aliases": [ 00:23:32.485 "951b92b5-c2cc-470e-b062-81d1d780eb89" 00:23:32.485 ], 00:23:32.485 "product_name": "Malloc disk", 00:23:32.485 "block_size": 512, 00:23:32.485 "num_blocks": 65536, 00:23:32.485 "uuid": "951b92b5-c2cc-470e-b062-81d1d780eb89", 00:23:32.485 "assigned_rate_limits": { 00:23:32.485 "rw_ios_per_sec": 0, 00:23:32.485 "rw_mbytes_per_sec": 0, 00:23:32.485 "r_mbytes_per_sec": 0, 00:23:32.485 "w_mbytes_per_sec": 0 00:23:32.485 }, 00:23:32.485 "claimed": true, 00:23:32.485 "claim_type": "exclusive_write", 00:23:32.485 "zoned": false, 00:23:32.485 "supported_io_types": { 00:23:32.485 "read": true, 00:23:32.485 "write": true, 00:23:32.485 "unmap": true, 00:23:32.485 "flush": true, 00:23:32.485 "reset": true, 00:23:32.485 "nvme_admin": false, 00:23:32.485 "nvme_io": false, 00:23:32.485 "nvme_io_md": false, 00:23:32.485 "write_zeroes": true, 00:23:32.485 "zcopy": true, 00:23:32.485 "get_zone_info": false, 00:23:32.485 "zone_management": false, 00:23:32.485 "zone_append": false, 00:23:32.485 "compare": false, 00:23:32.485 "compare_and_write": false, 00:23:32.485 "abort": true, 00:23:32.485 "seek_hole": false, 00:23:32.485 "seek_data": false, 00:23:32.485 "copy": true, 00:23:32.485 "nvme_iov_md": false 00:23:32.485 }, 00:23:32.485 
"memory_domains": [ 00:23:32.485 { 00:23:32.485 "dma_device_id": "system", 00:23:32.485 "dma_device_type": 1 00:23:32.485 }, 00:23:32.485 { 00:23:32.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.485 "dma_device_type": 2 00:23:32.485 } 00:23:32.485 ], 00:23:32.485 "driver_specific": {} 00:23:32.485 } 00:23:32.485 ] 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.485 "name": "Existed_Raid", 00:23:32.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.485 "strip_size_kb": 0, 00:23:32.485 "state": "configuring", 00:23:32.485 "raid_level": "raid1", 00:23:32.485 "superblock": false, 00:23:32.485 "num_base_bdevs": 3, 00:23:32.485 "num_base_bdevs_discovered": 2, 00:23:32.485 "num_base_bdevs_operational": 3, 00:23:32.485 "base_bdevs_list": [ 00:23:32.485 { 00:23:32.485 "name": "BaseBdev1", 00:23:32.485 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:32.485 "is_configured": true, 00:23:32.485 "data_offset": 0, 00:23:32.485 "data_size": 65536 00:23:32.485 }, 00:23:32.485 { 00:23:32.485 "name": "BaseBdev2", 00:23:32.485 "uuid": "951b92b5-c2cc-470e-b062-81d1d780eb89", 00:23:32.485 "is_configured": true, 00:23:32.485 "data_offset": 0, 00:23:32.485 "data_size": 65536 00:23:32.485 }, 00:23:32.485 { 00:23:32.485 "name": "BaseBdev3", 00:23:32.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.485 "is_configured": false, 00:23:32.485 "data_offset": 0, 00:23:32.485 "data_size": 0 00:23:32.485 } 00:23:32.485 ] 00:23:32.485 }' 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.485 07:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 [2024-11-20 07:20:57.108359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:33.052 [2024-11-20 07:20:57.108430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:33.052 [2024-11-20 07:20:57.108451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:33.052 [2024-11-20 07:20:57.108841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:33.052 [2024-11-20 07:20:57.109072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:33.052 [2024-11-20 07:20:57.109097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:33.052 [2024-11-20 07:20:57.109426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.052 BaseBdev3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 [ 00:23:33.052 { 00:23:33.052 "name": "BaseBdev3", 00:23:33.052 "aliases": [ 00:23:33.052 "e13f97a4-3add-4808-bee0-a23a4feaf947" 00:23:33.052 ], 00:23:33.052 "product_name": "Malloc disk", 00:23:33.052 "block_size": 512, 00:23:33.052 "num_blocks": 65536, 00:23:33.052 "uuid": "e13f97a4-3add-4808-bee0-a23a4feaf947", 00:23:33.052 "assigned_rate_limits": { 00:23:33.052 "rw_ios_per_sec": 0, 00:23:33.052 "rw_mbytes_per_sec": 0, 00:23:33.052 "r_mbytes_per_sec": 0, 00:23:33.052 "w_mbytes_per_sec": 0 00:23:33.052 }, 00:23:33.052 "claimed": true, 00:23:33.052 "claim_type": "exclusive_write", 00:23:33.052 "zoned": false, 00:23:33.052 "supported_io_types": { 00:23:33.052 "read": true, 00:23:33.052 "write": true, 00:23:33.052 "unmap": true, 00:23:33.052 "flush": true, 00:23:33.052 "reset": true, 00:23:33.052 "nvme_admin": false, 00:23:33.052 "nvme_io": false, 00:23:33.052 "nvme_io_md": false, 00:23:33.052 "write_zeroes": true, 00:23:33.052 "zcopy": true, 00:23:33.052 "get_zone_info": false, 00:23:33.052 "zone_management": false, 00:23:33.052 "zone_append": false, 00:23:33.052 "compare": false, 00:23:33.052 "compare_and_write": false, 00:23:33.052 "abort": true, 00:23:33.052 "seek_hole": false, 00:23:33.052 "seek_data": false, 00:23:33.052 
"copy": true, 00:23:33.052 "nvme_iov_md": false 00:23:33.052 }, 00:23:33.052 "memory_domains": [ 00:23:33.052 { 00:23:33.052 "dma_device_id": "system", 00:23:33.052 "dma_device_type": 1 00:23:33.052 }, 00:23:33.052 { 00:23:33.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.052 "dma_device_type": 2 00:23:33.052 } 00:23:33.052 ], 00:23:33.052 "driver_specific": {} 00:23:33.052 } 00:23:33.052 ] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.052 07:20:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.052 "name": "Existed_Raid", 00:23:33.052 "uuid": "0bc28f3b-0db2-45e7-a6b4-a8176abe3538", 00:23:33.052 "strip_size_kb": 0, 00:23:33.052 "state": "online", 00:23:33.052 "raid_level": "raid1", 00:23:33.052 "superblock": false, 00:23:33.052 "num_base_bdevs": 3, 00:23:33.052 "num_base_bdevs_discovered": 3, 00:23:33.052 "num_base_bdevs_operational": 3, 00:23:33.052 "base_bdevs_list": [ 00:23:33.052 { 00:23:33.052 "name": "BaseBdev1", 00:23:33.052 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:33.052 "is_configured": true, 00:23:33.052 "data_offset": 0, 00:23:33.052 "data_size": 65536 00:23:33.052 }, 00:23:33.052 { 00:23:33.052 "name": "BaseBdev2", 00:23:33.052 "uuid": "951b92b5-c2cc-470e-b062-81d1d780eb89", 00:23:33.052 "is_configured": true, 00:23:33.052 "data_offset": 0, 00:23:33.052 "data_size": 65536 00:23:33.052 }, 00:23:33.052 { 00:23:33.052 "name": "BaseBdev3", 00:23:33.052 "uuid": "e13f97a4-3add-4808-bee0-a23a4feaf947", 00:23:33.052 "is_configured": true, 00:23:33.052 "data_offset": 0, 00:23:33.052 "data_size": 65536 00:23:33.052 } 00:23:33.052 ] 00:23:33.052 }' 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.052 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.618 07:20:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.618 [2024-11-20 07:20:57.624958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.618 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.618 "name": "Existed_Raid", 00:23:33.618 "aliases": [ 00:23:33.618 "0bc28f3b-0db2-45e7-a6b4-a8176abe3538" 00:23:33.618 ], 00:23:33.618 "product_name": "Raid Volume", 00:23:33.618 "block_size": 512, 00:23:33.618 "num_blocks": 65536, 00:23:33.618 "uuid": "0bc28f3b-0db2-45e7-a6b4-a8176abe3538", 00:23:33.618 "assigned_rate_limits": { 00:23:33.618 "rw_ios_per_sec": 0, 00:23:33.618 "rw_mbytes_per_sec": 0, 00:23:33.618 "r_mbytes_per_sec": 0, 00:23:33.618 "w_mbytes_per_sec": 0 00:23:33.618 }, 00:23:33.618 "claimed": false, 00:23:33.618 "zoned": false, 
00:23:33.618 "supported_io_types": { 00:23:33.618 "read": true, 00:23:33.618 "write": true, 00:23:33.618 "unmap": false, 00:23:33.618 "flush": false, 00:23:33.618 "reset": true, 00:23:33.618 "nvme_admin": false, 00:23:33.618 "nvme_io": false, 00:23:33.618 "nvme_io_md": false, 00:23:33.618 "write_zeroes": true, 00:23:33.618 "zcopy": false, 00:23:33.618 "get_zone_info": false, 00:23:33.618 "zone_management": false, 00:23:33.618 "zone_append": false, 00:23:33.618 "compare": false, 00:23:33.618 "compare_and_write": false, 00:23:33.618 "abort": false, 00:23:33.618 "seek_hole": false, 00:23:33.618 "seek_data": false, 00:23:33.618 "copy": false, 00:23:33.618 "nvme_iov_md": false 00:23:33.618 }, 00:23:33.618 "memory_domains": [ 00:23:33.618 { 00:23:33.618 "dma_device_id": "system", 00:23:33.618 "dma_device_type": 1 00:23:33.618 }, 00:23:33.618 { 00:23:33.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.618 "dma_device_type": 2 00:23:33.618 }, 00:23:33.618 { 00:23:33.618 "dma_device_id": "system", 00:23:33.619 "dma_device_type": 1 00:23:33.619 }, 00:23:33.619 { 00:23:33.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.619 "dma_device_type": 2 00:23:33.619 }, 00:23:33.619 { 00:23:33.619 "dma_device_id": "system", 00:23:33.619 "dma_device_type": 1 00:23:33.619 }, 00:23:33.619 { 00:23:33.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.619 "dma_device_type": 2 00:23:33.619 } 00:23:33.619 ], 00:23:33.619 "driver_specific": { 00:23:33.619 "raid": { 00:23:33.619 "uuid": "0bc28f3b-0db2-45e7-a6b4-a8176abe3538", 00:23:33.619 "strip_size_kb": 0, 00:23:33.619 "state": "online", 00:23:33.619 "raid_level": "raid1", 00:23:33.619 "superblock": false, 00:23:33.619 "num_base_bdevs": 3, 00:23:33.619 "num_base_bdevs_discovered": 3, 00:23:33.619 "num_base_bdevs_operational": 3, 00:23:33.619 "base_bdevs_list": [ 00:23:33.619 { 00:23:33.619 "name": "BaseBdev1", 00:23:33.619 "uuid": "8df13a8c-6544-4fb8-8a9c-fe5a307611b8", 00:23:33.619 "is_configured": true, 00:23:33.619 
"data_offset": 0, 00:23:33.619 "data_size": 65536 00:23:33.619 }, 00:23:33.619 { 00:23:33.619 "name": "BaseBdev2", 00:23:33.619 "uuid": "951b92b5-c2cc-470e-b062-81d1d780eb89", 00:23:33.619 "is_configured": true, 00:23:33.619 "data_offset": 0, 00:23:33.619 "data_size": 65536 00:23:33.619 }, 00:23:33.619 { 00:23:33.619 "name": "BaseBdev3", 00:23:33.619 "uuid": "e13f97a4-3add-4808-bee0-a23a4feaf947", 00:23:33.619 "is_configured": true, 00:23:33.619 "data_offset": 0, 00:23:33.619 "data_size": 65536 00:23:33.619 } 00:23:33.619 ] 00:23:33.619 } 00:23:33.619 } 00:23:33.619 }' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:33.619 BaseBdev2 00:23:33.619 BaseBdev3' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.619 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.876 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:33.876 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:23:33.877 07:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:33.877 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.877 07:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.877 [2024-11-20 07:20:57.936710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.877 "name": "Existed_Raid", 00:23:33.877 "uuid": "0bc28f3b-0db2-45e7-a6b4-a8176abe3538", 00:23:33.877 "strip_size_kb": 0, 00:23:33.877 "state": "online", 00:23:33.877 "raid_level": "raid1", 00:23:33.877 "superblock": false, 00:23:33.877 "num_base_bdevs": 3, 00:23:33.877 "num_base_bdevs_discovered": 2, 00:23:33.877 "num_base_bdevs_operational": 2, 00:23:33.877 "base_bdevs_list": [ 00:23:33.877 { 00:23:33.877 "name": null, 00:23:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.877 "is_configured": false, 00:23:33.877 "data_offset": 0, 00:23:33.877 "data_size": 65536 00:23:33.877 }, 00:23:33.877 { 00:23:33.877 "name": "BaseBdev2", 00:23:33.877 "uuid": "951b92b5-c2cc-470e-b062-81d1d780eb89", 00:23:33.877 "is_configured": true, 00:23:33.877 "data_offset": 0, 00:23:33.877 "data_size": 65536 00:23:33.877 }, 00:23:33.877 { 00:23:33.877 "name": "BaseBdev3", 00:23:33.877 "uuid": "e13f97a4-3add-4808-bee0-a23a4feaf947", 00:23:33.877 "is_configured": true, 00:23:33.877 "data_offset": 0, 00:23:33.877 "data_size": 65536 00:23:33.877 } 00:23:33.877 ] 
00:23:33.877 }' 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.877 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.442 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:34.442 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:34.442 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 [2024-11-20 07:20:58.569387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:34.443 07:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.443 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 [2024-11-20 07:20:58.723889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:34.443 [2024-11-20 07:20:58.724023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:34.702 [2024-11-20 07:20:58.810223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.702 [2024-11-20 07:20:58.810299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:34.702 [2024-11-20 07:20:58.810320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:34.702 07:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 BaseBdev2 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:34.702 
07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 [ 00:23:34.702 { 00:23:34.702 "name": "BaseBdev2", 00:23:34.702 "aliases": [ 00:23:34.702 "3f502b35-2319-47a4-857f-a928517a9770" 00:23:34.702 ], 00:23:34.702 "product_name": "Malloc disk", 00:23:34.702 "block_size": 512, 00:23:34.702 "num_blocks": 65536, 00:23:34.702 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:34.702 "assigned_rate_limits": { 00:23:34.702 "rw_ios_per_sec": 0, 00:23:34.702 "rw_mbytes_per_sec": 0, 00:23:34.702 "r_mbytes_per_sec": 0, 00:23:34.702 "w_mbytes_per_sec": 0 00:23:34.702 }, 00:23:34.702 "claimed": false, 00:23:34.702 "zoned": false, 00:23:34.702 "supported_io_types": { 00:23:34.702 "read": true, 00:23:34.702 "write": true, 00:23:34.702 "unmap": true, 00:23:34.702 "flush": true, 00:23:34.702 "reset": true, 00:23:34.702 "nvme_admin": false, 00:23:34.702 "nvme_io": false, 00:23:34.702 "nvme_io_md": false, 00:23:34.702 "write_zeroes": true, 
00:23:34.702 "zcopy": true, 00:23:34.702 "get_zone_info": false, 00:23:34.702 "zone_management": false, 00:23:34.702 "zone_append": false, 00:23:34.702 "compare": false, 00:23:34.702 "compare_and_write": false, 00:23:34.702 "abort": true, 00:23:34.702 "seek_hole": false, 00:23:34.702 "seek_data": false, 00:23:34.702 "copy": true, 00:23:34.702 "nvme_iov_md": false 00:23:34.702 }, 00:23:34.702 "memory_domains": [ 00:23:34.702 { 00:23:34.702 "dma_device_id": "system", 00:23:34.702 "dma_device_type": 1 00:23:34.702 }, 00:23:34.702 { 00:23:34.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.702 "dma_device_type": 2 00:23:34.702 } 00:23:34.702 ], 00:23:34.702 "driver_specific": {} 00:23:34.702 } 00:23:34.702 ] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 BaseBdev3 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:34.702 07:20:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.702 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.961 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.961 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.961 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.961 07:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.961 [ 00:23:34.961 { 00:23:34.961 "name": "BaseBdev3", 00:23:34.961 "aliases": [ 00:23:34.961 "992e87db-1ef6-405b-822b-3bc28709f878" 00:23:34.961 ], 00:23:34.961 "product_name": "Malloc disk", 00:23:34.961 "block_size": 512, 00:23:34.961 "num_blocks": 65536, 00:23:34.961 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:34.961 "assigned_rate_limits": { 00:23:34.961 "rw_ios_per_sec": 0, 00:23:34.961 "rw_mbytes_per_sec": 0, 00:23:34.961 "r_mbytes_per_sec": 0, 00:23:34.961 "w_mbytes_per_sec": 0 00:23:34.961 }, 00:23:34.961 "claimed": false, 00:23:34.961 "zoned": false, 00:23:34.961 "supported_io_types": { 00:23:34.961 "read": true, 00:23:34.961 "write": true, 00:23:34.961 "unmap": true, 00:23:34.961 "flush": true, 00:23:34.961 "reset": true, 00:23:34.961 "nvme_admin": false, 00:23:34.961 "nvme_io": false, 00:23:34.961 "nvme_io_md": false, 00:23:34.961 "write_zeroes": true, 
00:23:34.961 "zcopy": true, 00:23:34.961 "get_zone_info": false, 00:23:34.961 "zone_management": false, 00:23:34.961 "zone_append": false, 00:23:34.961 "compare": false, 00:23:34.961 "compare_and_write": false, 00:23:34.961 "abort": true, 00:23:34.961 "seek_hole": false, 00:23:34.961 "seek_data": false, 00:23:34.961 "copy": true, 00:23:34.961 "nvme_iov_md": false 00:23:34.961 }, 00:23:34.961 "memory_domains": [ 00:23:34.961 { 00:23:34.961 "dma_device_id": "system", 00:23:34.961 "dma_device_type": 1 00:23:34.961 }, 00:23:34.961 { 00:23:34.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.961 "dma_device_type": 2 00:23:34.961 } 00:23:34.961 ], 00:23:34.961 "driver_specific": {} 00:23:34.961 } 00:23:34.961 ] 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.961 [2024-11-20 07:20:59.023541] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:34.961 [2024-11-20 07:20:59.023754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:34.961 [2024-11-20 07:20:59.023910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.961 [2024-11-20 07:20:59.026389] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.961 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:23:34.961 "name": "Existed_Raid", 00:23:34.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.961 "strip_size_kb": 0, 00:23:34.961 "state": "configuring", 00:23:34.961 "raid_level": "raid1", 00:23:34.961 "superblock": false, 00:23:34.961 "num_base_bdevs": 3, 00:23:34.961 "num_base_bdevs_discovered": 2, 00:23:34.961 "num_base_bdevs_operational": 3, 00:23:34.961 "base_bdevs_list": [ 00:23:34.961 { 00:23:34.961 "name": "BaseBdev1", 00:23:34.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.961 "is_configured": false, 00:23:34.961 "data_offset": 0, 00:23:34.961 "data_size": 0 00:23:34.961 }, 00:23:34.961 { 00:23:34.961 "name": "BaseBdev2", 00:23:34.961 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:34.962 "is_configured": true, 00:23:34.962 "data_offset": 0, 00:23:34.962 "data_size": 65536 00:23:34.962 }, 00:23:34.962 { 00:23:34.962 "name": "BaseBdev3", 00:23:34.962 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:34.962 "is_configured": true, 00:23:34.962 "data_offset": 0, 00:23:34.962 "data_size": 65536 00:23:34.962 } 00:23:34.962 ] 00:23:34.962 }' 00:23:34.962 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.962 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.526 [2024-11-20 07:20:59.527698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.526 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.526 "name": "Existed_Raid", 00:23:35.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.526 "strip_size_kb": 0, 00:23:35.526 "state": "configuring", 00:23:35.526 "raid_level": "raid1", 00:23:35.526 "superblock": false, 00:23:35.526 "num_base_bdevs": 3, 
00:23:35.526 "num_base_bdevs_discovered": 1, 00:23:35.526 "num_base_bdevs_operational": 3, 00:23:35.526 "base_bdevs_list": [ 00:23:35.526 { 00:23:35.526 "name": "BaseBdev1", 00:23:35.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.526 "is_configured": false, 00:23:35.526 "data_offset": 0, 00:23:35.526 "data_size": 0 00:23:35.526 }, 00:23:35.526 { 00:23:35.526 "name": null, 00:23:35.527 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:35.527 "is_configured": false, 00:23:35.527 "data_offset": 0, 00:23:35.527 "data_size": 65536 00:23:35.527 }, 00:23:35.527 { 00:23:35.527 "name": "BaseBdev3", 00:23:35.527 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:35.527 "is_configured": true, 00:23:35.527 "data_offset": 0, 00:23:35.527 "data_size": 65536 00:23:35.527 } 00:23:35.527 ] 00:23:35.527 }' 00:23:35.527 07:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.527 07:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.788 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.788 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:35.788 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.788 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.788 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.060 07:21:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 [2024-11-20 07:21:00.146106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.060 BaseBdev1 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 [ 00:23:36.060 { 00:23:36.060 "name": "BaseBdev1", 00:23:36.060 "aliases": [ 00:23:36.060 "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193" 00:23:36.060 ], 00:23:36.060 "product_name": "Malloc disk", 
00:23:36.060 "block_size": 512, 00:23:36.060 "num_blocks": 65536, 00:23:36.060 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:36.060 "assigned_rate_limits": { 00:23:36.060 "rw_ios_per_sec": 0, 00:23:36.060 "rw_mbytes_per_sec": 0, 00:23:36.060 "r_mbytes_per_sec": 0, 00:23:36.060 "w_mbytes_per_sec": 0 00:23:36.060 }, 00:23:36.060 "claimed": true, 00:23:36.060 "claim_type": "exclusive_write", 00:23:36.060 "zoned": false, 00:23:36.060 "supported_io_types": { 00:23:36.060 "read": true, 00:23:36.060 "write": true, 00:23:36.060 "unmap": true, 00:23:36.060 "flush": true, 00:23:36.060 "reset": true, 00:23:36.060 "nvme_admin": false, 00:23:36.060 "nvme_io": false, 00:23:36.060 "nvme_io_md": false, 00:23:36.060 "write_zeroes": true, 00:23:36.060 "zcopy": true, 00:23:36.060 "get_zone_info": false, 00:23:36.060 "zone_management": false, 00:23:36.060 "zone_append": false, 00:23:36.060 "compare": false, 00:23:36.060 "compare_and_write": false, 00:23:36.060 "abort": true, 00:23:36.060 "seek_hole": false, 00:23:36.060 "seek_data": false, 00:23:36.060 "copy": true, 00:23:36.060 "nvme_iov_md": false 00:23:36.060 }, 00:23:36.060 "memory_domains": [ 00:23:36.060 { 00:23:36.060 "dma_device_id": "system", 00:23:36.060 "dma_device_type": 1 00:23:36.060 }, 00:23:36.060 { 00:23:36.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.060 "dma_device_type": 2 00:23:36.060 } 00:23:36.060 ], 00:23:36.060 "driver_specific": {} 00:23:36.060 } 00:23:36.060 ] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.060 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.060 "name": "Existed_Raid", 00:23:36.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.061 "strip_size_kb": 0, 00:23:36.061 "state": "configuring", 00:23:36.061 "raid_level": "raid1", 00:23:36.061 "superblock": false, 00:23:36.061 "num_base_bdevs": 3, 00:23:36.061 "num_base_bdevs_discovered": 2, 00:23:36.061 "num_base_bdevs_operational": 3, 00:23:36.061 "base_bdevs_list": [ 00:23:36.061 { 00:23:36.061 "name": "BaseBdev1", 00:23:36.061 "uuid": 
"06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:36.061 "is_configured": true, 00:23:36.061 "data_offset": 0, 00:23:36.061 "data_size": 65536 00:23:36.061 }, 00:23:36.061 { 00:23:36.061 "name": null, 00:23:36.061 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:36.061 "is_configured": false, 00:23:36.061 "data_offset": 0, 00:23:36.061 "data_size": 65536 00:23:36.061 }, 00:23:36.061 { 00:23:36.061 "name": "BaseBdev3", 00:23:36.061 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:36.061 "is_configured": true, 00:23:36.061 "data_offset": 0, 00:23:36.061 "data_size": 65536 00:23:36.061 } 00:23:36.061 ] 00:23:36.061 }' 00:23:36.061 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.061 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.626 [2024-11-20 07:21:00.726282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:36.626 07:21:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.626 "name": "Existed_Raid", 00:23:36.626 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:36.626 "strip_size_kb": 0, 00:23:36.626 "state": "configuring", 00:23:36.626 "raid_level": "raid1", 00:23:36.626 "superblock": false, 00:23:36.626 "num_base_bdevs": 3, 00:23:36.626 "num_base_bdevs_discovered": 1, 00:23:36.626 "num_base_bdevs_operational": 3, 00:23:36.626 "base_bdevs_list": [ 00:23:36.626 { 00:23:36.626 "name": "BaseBdev1", 00:23:36.626 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:36.626 "is_configured": true, 00:23:36.626 "data_offset": 0, 00:23:36.626 "data_size": 65536 00:23:36.626 }, 00:23:36.626 { 00:23:36.626 "name": null, 00:23:36.626 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:36.626 "is_configured": false, 00:23:36.626 "data_offset": 0, 00:23:36.626 "data_size": 65536 00:23:36.626 }, 00:23:36.626 { 00:23:36.626 "name": null, 00:23:36.626 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:36.626 "is_configured": false, 00:23:36.626 "data_offset": 0, 00:23:36.626 "data_size": 65536 00:23:36.626 } 00:23:36.626 ] 00:23:36.626 }' 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.626 07:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.191 [2024-11-20 07:21:01.298708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.191 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.192 "name": "Existed_Raid", 00:23:37.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.192 "strip_size_kb": 0, 00:23:37.192 "state": "configuring", 00:23:37.192 "raid_level": "raid1", 00:23:37.192 "superblock": false, 00:23:37.192 "num_base_bdevs": 3, 00:23:37.192 "num_base_bdevs_discovered": 2, 00:23:37.192 "num_base_bdevs_operational": 3, 00:23:37.192 "base_bdevs_list": [ 00:23:37.192 { 00:23:37.192 "name": "BaseBdev1", 00:23:37.192 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:37.192 "is_configured": true, 00:23:37.192 "data_offset": 0, 00:23:37.192 "data_size": 65536 00:23:37.192 }, 00:23:37.192 { 00:23:37.192 "name": null, 00:23:37.192 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:37.192 "is_configured": false, 00:23:37.192 "data_offset": 0, 00:23:37.192 "data_size": 65536 00:23:37.192 }, 00:23:37.192 { 00:23:37.192 "name": "BaseBdev3", 00:23:37.192 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:37.192 "is_configured": true, 00:23:37.192 "data_offset": 0, 00:23:37.192 "data_size": 65536 00:23:37.192 } 00:23:37.192 ] 00:23:37.192 }' 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.192 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.758 [2024-11-20 07:21:01.866879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.758 07:21:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.758 07:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.758 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.758 "name": "Existed_Raid", 00:23:37.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.758 "strip_size_kb": 0, 00:23:37.758 "state": "configuring", 00:23:37.758 "raid_level": "raid1", 00:23:37.758 "superblock": false, 00:23:37.758 "num_base_bdevs": 3, 00:23:37.758 "num_base_bdevs_discovered": 1, 00:23:37.758 "num_base_bdevs_operational": 3, 00:23:37.758 "base_bdevs_list": [ 00:23:37.758 { 00:23:37.758 "name": null, 00:23:37.758 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:37.758 "is_configured": false, 00:23:37.758 "data_offset": 0, 00:23:37.758 "data_size": 65536 00:23:37.758 }, 00:23:37.758 { 00:23:37.758 "name": null, 00:23:37.758 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:37.758 "is_configured": false, 00:23:37.758 "data_offset": 0, 00:23:37.758 "data_size": 65536 00:23:37.758 }, 00:23:37.758 { 00:23:37.758 "name": "BaseBdev3", 00:23:37.758 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:37.758 "is_configured": true, 00:23:37.758 "data_offset": 0, 00:23:37.758 "data_size": 65536 00:23:37.758 } 00:23:37.758 ] 00:23:37.758 }' 00:23:37.758 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.758 07:21:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.324 [2024-11-20 07:21:02.503848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.324 "name": "Existed_Raid", 00:23:38.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.324 "strip_size_kb": 0, 00:23:38.324 "state": "configuring", 00:23:38.324 "raid_level": "raid1", 00:23:38.324 "superblock": false, 00:23:38.324 "num_base_bdevs": 3, 00:23:38.324 "num_base_bdevs_discovered": 2, 00:23:38.324 "num_base_bdevs_operational": 3, 00:23:38.324 "base_bdevs_list": [ 00:23:38.324 { 00:23:38.324 "name": null, 00:23:38.324 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:38.324 "is_configured": false, 00:23:38.324 "data_offset": 0, 00:23:38.324 "data_size": 65536 00:23:38.324 }, 00:23:38.324 { 00:23:38.324 "name": "BaseBdev2", 00:23:38.324 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:38.324 "is_configured": true, 00:23:38.324 "data_offset": 0, 00:23:38.324 "data_size": 65536 00:23:38.324 }, 00:23:38.324 { 
00:23:38.324 "name": "BaseBdev3", 00:23:38.324 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:38.324 "is_configured": true, 00:23:38.324 "data_offset": 0, 00:23:38.324 "data_size": 65536 00:23:38.324 } 00:23:38.324 ] 00:23:38.324 }' 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.324 07:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.892 07:21:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 [2024-11-20 07:21:03.150651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:38.892 [2024-11-20 07:21:03.150714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:38.892 [2024-11-20 07:21:03.150727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:38.892 [2024-11-20 07:21:03.151047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:38.892 [2024-11-20 07:21:03.151250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:38.892 [2024-11-20 07:21:03.151272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:38.892 [2024-11-20 07:21:03.151570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.892 NewBaseBdev 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.892 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.892 [ 00:23:38.892 { 00:23:38.892 "name": "NewBaseBdev", 00:23:38.892 "aliases": [ 00:23:38.892 "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193" 00:23:38.892 ], 00:23:38.892 "product_name": "Malloc disk", 00:23:38.892 "block_size": 512, 00:23:38.892 "num_blocks": 65536, 00:23:38.892 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:38.892 "assigned_rate_limits": { 00:23:38.892 "rw_ios_per_sec": 0, 00:23:38.892 "rw_mbytes_per_sec": 0, 00:23:38.892 "r_mbytes_per_sec": 0, 00:23:38.892 "w_mbytes_per_sec": 0 00:23:38.892 }, 00:23:38.892 "claimed": true, 00:23:38.892 "claim_type": "exclusive_write", 00:23:38.892 "zoned": false, 00:23:38.892 "supported_io_types": { 00:23:38.892 "read": true, 00:23:38.892 "write": true, 00:23:38.892 "unmap": true, 00:23:38.892 "flush": true, 00:23:38.892 "reset": true, 00:23:38.892 "nvme_admin": false, 00:23:38.892 "nvme_io": false, 00:23:38.892 "nvme_io_md": false, 00:23:38.892 "write_zeroes": true, 00:23:38.893 "zcopy": true, 00:23:38.893 "get_zone_info": false, 00:23:38.893 "zone_management": false, 00:23:38.893 "zone_append": false, 00:23:38.893 "compare": false, 00:23:38.893 "compare_and_write": false, 00:23:38.893 "abort": true, 00:23:38.893 "seek_hole": false, 00:23:38.893 "seek_data": false, 00:23:39.151 "copy": true, 00:23:39.151 "nvme_iov_md": false 00:23:39.151 }, 00:23:39.151 "memory_domains": [ 00:23:39.151 { 00:23:39.151 
"dma_device_id": "system", 00:23:39.151 "dma_device_type": 1 00:23:39.151 }, 00:23:39.151 { 00:23:39.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.151 "dma_device_type": 2 00:23:39.151 } 00:23:39.151 ], 00:23:39.151 "driver_specific": {} 00:23:39.151 } 00:23:39.151 ] 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.151 07:21:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.151 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.151 "name": "Existed_Raid", 00:23:39.151 "uuid": "d1f01758-39a0-4a76-b4b9-75e35e43b922", 00:23:39.151 "strip_size_kb": 0, 00:23:39.151 "state": "online", 00:23:39.151 "raid_level": "raid1", 00:23:39.151 "superblock": false, 00:23:39.152 "num_base_bdevs": 3, 00:23:39.152 "num_base_bdevs_discovered": 3, 00:23:39.152 "num_base_bdevs_operational": 3, 00:23:39.152 "base_bdevs_list": [ 00:23:39.152 { 00:23:39.152 "name": "NewBaseBdev", 00:23:39.152 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:39.152 "is_configured": true, 00:23:39.152 "data_offset": 0, 00:23:39.152 "data_size": 65536 00:23:39.152 }, 00:23:39.152 { 00:23:39.152 "name": "BaseBdev2", 00:23:39.152 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:39.152 "is_configured": true, 00:23:39.152 "data_offset": 0, 00:23:39.152 "data_size": 65536 00:23:39.152 }, 00:23:39.152 { 00:23:39.152 "name": "BaseBdev3", 00:23:39.152 "uuid": "992e87db-1ef6-405b-822b-3bc28709f878", 00:23:39.152 "is_configured": true, 00:23:39.152 "data_offset": 0, 00:23:39.152 "data_size": 65536 00:23:39.152 } 00:23:39.152 ] 00:23:39.152 }' 00:23:39.152 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.152 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:39.410 
07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:39.410 [2024-11-20 07:21:03.663212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.410 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.670 "name": "Existed_Raid", 00:23:39.670 "aliases": [ 00:23:39.670 "d1f01758-39a0-4a76-b4b9-75e35e43b922" 00:23:39.670 ], 00:23:39.670 "product_name": "Raid Volume", 00:23:39.670 "block_size": 512, 00:23:39.670 "num_blocks": 65536, 00:23:39.670 "uuid": "d1f01758-39a0-4a76-b4b9-75e35e43b922", 00:23:39.670 "assigned_rate_limits": { 00:23:39.670 "rw_ios_per_sec": 0, 00:23:39.670 "rw_mbytes_per_sec": 0, 00:23:39.670 "r_mbytes_per_sec": 0, 00:23:39.670 "w_mbytes_per_sec": 0 00:23:39.670 }, 00:23:39.670 "claimed": false, 00:23:39.670 "zoned": false, 00:23:39.670 "supported_io_types": { 00:23:39.670 "read": true, 00:23:39.670 "write": true, 00:23:39.670 "unmap": false, 00:23:39.670 "flush": false, 00:23:39.670 "reset": true, 00:23:39.670 "nvme_admin": false, 00:23:39.670 "nvme_io": false, 00:23:39.670 "nvme_io_md": false, 00:23:39.670 "write_zeroes": true, 00:23:39.670 "zcopy": false, 00:23:39.670 
"get_zone_info": false, 00:23:39.670 "zone_management": false, 00:23:39.670 "zone_append": false, 00:23:39.670 "compare": false, 00:23:39.670 "compare_and_write": false, 00:23:39.670 "abort": false, 00:23:39.670 "seek_hole": false, 00:23:39.670 "seek_data": false, 00:23:39.670 "copy": false, 00:23:39.670 "nvme_iov_md": false 00:23:39.670 }, 00:23:39.670 "memory_domains": [ 00:23:39.670 { 00:23:39.670 "dma_device_id": "system", 00:23:39.670 "dma_device_type": 1 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.670 "dma_device_type": 2 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "dma_device_id": "system", 00:23:39.670 "dma_device_type": 1 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.670 "dma_device_type": 2 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "dma_device_id": "system", 00:23:39.670 "dma_device_type": 1 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.670 "dma_device_type": 2 00:23:39.670 } 00:23:39.670 ], 00:23:39.670 "driver_specific": { 00:23:39.670 "raid": { 00:23:39.670 "uuid": "d1f01758-39a0-4a76-b4b9-75e35e43b922", 00:23:39.670 "strip_size_kb": 0, 00:23:39.670 "state": "online", 00:23:39.670 "raid_level": "raid1", 00:23:39.670 "superblock": false, 00:23:39.670 "num_base_bdevs": 3, 00:23:39.670 "num_base_bdevs_discovered": 3, 00:23:39.670 "num_base_bdevs_operational": 3, 00:23:39.670 "base_bdevs_list": [ 00:23:39.670 { 00:23:39.670 "name": "NewBaseBdev", 00:23:39.670 "uuid": "06e38b32-f0f7-4cd1-ad8a-57f4e7c7c193", 00:23:39.670 "is_configured": true, 00:23:39.670 "data_offset": 0, 00:23:39.670 "data_size": 65536 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "name": "BaseBdev2", 00:23:39.670 "uuid": "3f502b35-2319-47a4-857f-a928517a9770", 00:23:39.670 "is_configured": true, 00:23:39.670 "data_offset": 0, 00:23:39.670 "data_size": 65536 00:23:39.670 }, 00:23:39.670 { 00:23:39.670 "name": "BaseBdev3", 00:23:39.670 "uuid": 
"992e87db-1ef6-405b-822b-3bc28709f878", 00:23:39.670 "is_configured": true, 00:23:39.670 "data_offset": 0, 00:23:39.670 "data_size": 65536 00:23:39.670 } 00:23:39.670 ] 00:23:39.670 } 00:23:39.670 } 00:23:39.670 }' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:39.670 BaseBdev2 00:23:39.670 BaseBdev3' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.670 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:23:39.929 [2024-11-20 07:21:03.990935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.929 [2024-11-20 07:21:03.990978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.929 [2024-11-20 07:21:03.991078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.929 [2024-11-20 07:21:03.991449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.929 [2024-11-20 07:21:03.991466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67659 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67659 ']' 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67659 00:23:39.929 07:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67659 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.929 killing process with pid 67659 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67659' 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67659 00:23:39.929 
[2024-11-20 07:21:04.035465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:39.929 07:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67659 00:23:40.188 [2024-11-20 07:21:04.310522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:41.135 ************************************ 00:23:41.135 END TEST raid_state_function_test 00:23:41.135 ************************************ 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:41.135 00:23:41.135 real 0m11.618s 00:23:41.135 user 0m19.250s 00:23:41.135 sys 0m1.592s 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.135 07:21:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:23:41.135 07:21:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:41.135 07:21:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.135 07:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:41.135 ************************************ 00:23:41.135 START TEST raid_state_function_test_sb 00:23:41.135 ************************************ 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:41.135 07:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:41.135 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:41.136 
07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:41.136 Process raid pid: 68291 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68291 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68291' 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68291 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68291 ']' 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.136 07:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.394 [2024-11-20 07:21:05.505918] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:23:41.394 [2024-11-20 07:21:05.507049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.652 [2024-11-20 07:21:05.693577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.652 [2024-11-20 07:21:05.824352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.910 [2024-11-20 07:21:06.031541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.910 [2024-11-20 07:21:06.031807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.169 [2024-11-20 07:21:06.426032] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:42.169 [2024-11-20 07:21:06.426096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:42.169 [2024-11-20 07:21:06.426114] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:42.169 [2024-11-20 07:21:06.426130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:42.169 [2024-11-20 07:21:06.426140] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:23:42.169 [2024-11-20 07:21:06.426154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.169 07:21:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.427 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:42.427 "name": "Existed_Raid",
00:23:42.427 "uuid": "7e7ec398-f139-49eb-a41a-151d1fbef0e3",
00:23:42.427 "strip_size_kb": 0,
00:23:42.427 "state": "configuring",
00:23:42.427 "raid_level": "raid1",
00:23:42.427 "superblock": true,
00:23:42.427 "num_base_bdevs": 3,
00:23:42.427 "num_base_bdevs_discovered": 0,
00:23:42.427 "num_base_bdevs_operational": 3,
00:23:42.427 "base_bdevs_list": [
00:23:42.427 {
00:23:42.427 "name": "BaseBdev1",
00:23:42.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.427 "is_configured": false,
00:23:42.427 "data_offset": 0,
00:23:42.427 "data_size": 0
00:23:42.427 },
00:23:42.427 {
00:23:42.427 "name": "BaseBdev2",
00:23:42.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.427 "is_configured": false,
00:23:42.427 "data_offset": 0,
00:23:42.427 "data_size": 0
00:23:42.427 },
00:23:42.427 {
00:23:42.427 "name": "BaseBdev3",
00:23:42.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.427 "is_configured": false,
00:23:42.427 "data_offset": 0,
00:23:42.427 "data_size": 0
00:23:42.427 }
00:23:42.427 ]
00:23:42.427 }'
00:23:42.427 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:42.427 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.685 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:23:42.685 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.685 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 [2024-11-20 07:21:06.974103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:42.944 [2024-11-20 07:21:06.974149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 [2024-11-20 07:21:06.982078] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:42.944 [2024-11-20 07:21:06.982132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:42.944 [2024-11-20 07:21:06.982148] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:42.944 [2024-11-20 07:21:06.982164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:42.944 [2024-11-20 07:21:06.982174] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:42.944 [2024-11-20 07:21:06.982188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.944 07:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 [2024-11-20 07:21:07.027147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:42.944 BaseBdev1
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 [
00:23:42.944 {
00:23:42.944 "name": "BaseBdev1",
00:23:42.944 "aliases": [
00:23:42.944 "11dab4bb-c254-482d-92a7-cfca246d89bf"
00:23:42.944 ],
00:23:42.944 "product_name": "Malloc disk",
00:23:42.944 "block_size": 512,
00:23:42.944 "num_blocks": 65536,
00:23:42.944 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:42.944 "assigned_rate_limits": {
00:23:42.944 "rw_ios_per_sec": 0,
00:23:42.944 "rw_mbytes_per_sec": 0,
00:23:42.944 "r_mbytes_per_sec": 0,
00:23:42.944 "w_mbytes_per_sec": 0
00:23:42.944 },
00:23:42.944 "claimed": true,
00:23:42.944 "claim_type": "exclusive_write",
00:23:42.944 "zoned": false,
00:23:42.944 "supported_io_types": {
00:23:42.944 "read": true,
00:23:42.944 "write": true,
00:23:42.944 "unmap": true,
00:23:42.944 "flush": true,
00:23:42.944 "reset": true,
00:23:42.944 "nvme_admin": false,
00:23:42.944 "nvme_io": false,
00:23:42.944 "nvme_io_md": false,
00:23:42.944 "write_zeroes": true,
00:23:42.944 "zcopy": true,
00:23:42.944 "get_zone_info": false,
00:23:42.944 "zone_management": false,
00:23:42.944 "zone_append": false,
00:23:42.944 "compare": false,
00:23:42.944 "compare_and_write": false,
00:23:42.944 "abort": true,
00:23:42.944 "seek_hole": false,
00:23:42.944 "seek_data": false,
00:23:42.944 "copy": true,
00:23:42.944 "nvme_iov_md": false
00:23:42.944 },
00:23:42.944 "memory_domains": [
00:23:42.944 {
00:23:42.944 "dma_device_id": "system",
00:23:42.944 "dma_device_type": 1
00:23:42.944 },
00:23:42.944 {
00:23:42.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:42.944 "dma_device_type": 2
00:23:42.944 }
00:23:42.944 ],
00:23:42.944 "driver_specific": {}
00:23:42.944 }
00:23:42.944 ]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:42.944 "name": "Existed_Raid",
00:23:42.944 "uuid": "15c91a5e-7816-44fe-99fb-77dd8c0a5cbd",
00:23:42.944 "strip_size_kb": 0,
00:23:42.944 "state": "configuring",
00:23:42.944 "raid_level": "raid1",
00:23:42.944 "superblock": true,
00:23:42.944 "num_base_bdevs": 3,
00:23:42.944 "num_base_bdevs_discovered": 1,
00:23:42.944 "num_base_bdevs_operational": 3,
00:23:42.944 "base_bdevs_list": [
00:23:42.944 {
00:23:42.944 "name": "BaseBdev1",
00:23:42.944 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:42.944 "is_configured": true,
00:23:42.944 "data_offset": 2048,
00:23:42.944 "data_size": 63488
00:23:42.944 },
00:23:42.944 {
00:23:42.944 "name": "BaseBdev2",
00:23:42.944 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.944 "is_configured": false,
00:23:42.944 "data_offset": 0,
00:23:42.944 "data_size": 0
00:23:42.944 },
00:23:42.944 {
00:23:42.944 "name": "BaseBdev3",
00:23:42.944 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.944 "is_configured": false,
00:23:42.944 "data_offset": 0,
00:23:42.944 "data_size": 0
00:23:42.944 }
00:23:42.944 ]
00:23:42.944 }'
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:42.944 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:43.511 [2024-11-20 07:21:07.571330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:43.511 [2024-11-20 07:21:07.571393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:43.511 [2024-11-20 07:21:07.579377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:43.511 [2024-11-20 07:21:07.581766] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:43.511 [2024-11-20 07:21:07.581819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:43.511 [2024-11-20 07:21:07.581836] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:43.511 [2024-11-20 07:21:07.581851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:43.511 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:43.512 "name": "Existed_Raid",
00:23:43.512 "uuid": "0640121c-9941-4319-9c00-23a249bf006e",
00:23:43.512 "strip_size_kb": 0,
00:23:43.512 "state": "configuring",
00:23:43.512 "raid_level": "raid1",
00:23:43.512 "superblock": true,
00:23:43.512 "num_base_bdevs": 3,
00:23:43.512 "num_base_bdevs_discovered": 1,
00:23:43.512 "num_base_bdevs_operational": 3,
00:23:43.512 "base_bdevs_list": [
00:23:43.512 {
00:23:43.512 "name": "BaseBdev1",
00:23:43.512 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:43.512 "is_configured": true,
00:23:43.512 "data_offset": 2048,
00:23:43.512 "data_size": 63488
00:23:43.512 },
00:23:43.512 {
00:23:43.512 "name": "BaseBdev2",
00:23:43.512 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:43.512 "is_configured": false,
00:23:43.512 "data_offset": 0,
00:23:43.512 "data_size": 0
00:23:43.512 },
00:23:43.512 {
00:23:43.512 "name": "BaseBdev3",
00:23:43.512 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:43.512 "is_configured": false,
00:23:43.512 "data_offset": 0,
00:23:43.512 "data_size": 0
00:23:43.512 }
00:23:43.512 ]
00:23:43.512 }'
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:43.512 07:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.079 [2024-11-20 07:21:08.126502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:44.079 BaseBdev2
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.079 [
00:23:44.079 {
00:23:44.079 "name": "BaseBdev2",
00:23:44.079 "aliases": [
00:23:44.079 "e4b5677c-0498-4d7c-8912-b4f1261c951f"
00:23:44.079 ],
00:23:44.079 "product_name": "Malloc disk",
00:23:44.079 "block_size": 512,
00:23:44.079 "num_blocks": 65536,
00:23:44.079 "uuid": "e4b5677c-0498-4d7c-8912-b4f1261c951f",
00:23:44.079 "assigned_rate_limits": {
00:23:44.079 "rw_ios_per_sec": 0,
00:23:44.079 "rw_mbytes_per_sec": 0,
00:23:44.079 "r_mbytes_per_sec": 0,
00:23:44.079 "w_mbytes_per_sec": 0
00:23:44.079 },
00:23:44.079 "claimed": true,
00:23:44.079 "claim_type": "exclusive_write",
00:23:44.079 "zoned": false,
00:23:44.079 "supported_io_types": {
00:23:44.079 "read": true,
00:23:44.079 "write": true,
00:23:44.079 "unmap": true,
00:23:44.079 "flush": true,
00:23:44.079 "reset": true,
00:23:44.079 "nvme_admin": false,
00:23:44.079 "nvme_io": false,
00:23:44.079 "nvme_io_md": false,
00:23:44.079 "write_zeroes": true,
00:23:44.079 "zcopy": true,
00:23:44.079 "get_zone_info": false,
00:23:44.079 "zone_management": false,
00:23:44.079 "zone_append": false,
00:23:44.079 "compare": false,
00:23:44.079 "compare_and_write": false,
00:23:44.079 "abort": true,
00:23:44.079 "seek_hole": false,
00:23:44.079 "seek_data": false,
00:23:44.079 "copy": true,
00:23:44.079 "nvme_iov_md": false
00:23:44.079 },
00:23:44.079 "memory_domains": [
00:23:44.079 {
00:23:44.079 "dma_device_id": "system",
00:23:44.079 "dma_device_type": 1
00:23:44.079 },
00:23:44.079 {
00:23:44.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:44.079 "dma_device_type": 2
00:23:44.079 }
00:23:44.079 ],
00:23:44.079 "driver_specific": {}
00:23:44.079 }
00:23:44.079 ]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:44.079 "name": "Existed_Raid",
00:23:44.079 "uuid": "0640121c-9941-4319-9c00-23a249bf006e",
00:23:44.079 "strip_size_kb": 0,
00:23:44.079 "state": "configuring",
00:23:44.079 "raid_level": "raid1",
00:23:44.079 "superblock": true,
00:23:44.079 "num_base_bdevs": 3,
00:23:44.079 "num_base_bdevs_discovered": 2,
00:23:44.079 "num_base_bdevs_operational": 3,
00:23:44.079 "base_bdevs_list": [
00:23:44.079 {
00:23:44.079 "name": "BaseBdev1",
00:23:44.079 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:44.079 "is_configured": true,
00:23:44.079 "data_offset": 2048,
00:23:44.079 "data_size": 63488
00:23:44.079 },
00:23:44.079 {
00:23:44.079 "name": "BaseBdev2",
00:23:44.079 "uuid": "e4b5677c-0498-4d7c-8912-b4f1261c951f",
00:23:44.079 "is_configured": true,
00:23:44.079 "data_offset": 2048,
00:23:44.079 "data_size": 63488
00:23:44.079 },
00:23:44.079 {
00:23:44.079 "name": "BaseBdev3",
00:23:44.079 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:44.079 "is_configured": false,
00:23:44.079 "data_offset": 0,
00:23:44.079 "data_size": 0
00:23:44.079 }
00:23:44.079 ]
00:23:44.079 }'
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:44.079 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.646 [2024-11-20 07:21:08.731705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:44.646 [2024-11-20 07:21:08.732099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:23:44.646 [2024-11-20 07:21:08.732139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:23:44.646 BaseBdev3
00:23:44.646 [2024-11-20 07:21:08.732726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:23:44.646 [2024-11-20 07:21:08.732994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:23:44.646 [2024-11-20 07:21:08.733014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.646 [2024-11-20 07:21:08.733230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.646 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.646 [
00:23:44.646 {
00:23:44.646 "name": "BaseBdev3",
00:23:44.646 "aliases": [
00:23:44.646 "487b9a09-57f5-4af2-bc22-6ee05d2bc80d"
00:23:44.646 ],
00:23:44.647 "product_name": "Malloc disk",
00:23:44.647 "block_size": 512,
00:23:44.647 "num_blocks": 65536,
00:23:44.647 "uuid": "487b9a09-57f5-4af2-bc22-6ee05d2bc80d",
00:23:44.647 "assigned_rate_limits": {
00:23:44.647 "rw_ios_per_sec": 0,
00:23:44.647 "rw_mbytes_per_sec": 0,
00:23:44.647 "r_mbytes_per_sec": 0,
00:23:44.647 "w_mbytes_per_sec": 0
00:23:44.647 },
00:23:44.647 "claimed": true,
00:23:44.647 "claim_type": "exclusive_write",
00:23:44.647 "zoned": false,
00:23:44.647 "supported_io_types": {
00:23:44.647 "read": true,
00:23:44.647 "write": true,
00:23:44.647 "unmap": true,
00:23:44.647 "flush": true,
00:23:44.647 "reset": true,
00:23:44.647 "nvme_admin": false,
00:23:44.647 "nvme_io": false,
00:23:44.647 "nvme_io_md": false,
00:23:44.647 "write_zeroes": true,
00:23:44.647 "zcopy": true,
00:23:44.647 "get_zone_info": false,
00:23:44.647 "zone_management": false,
00:23:44.647 "zone_append": false,
00:23:44.647 "compare": false,
00:23:44.647 "compare_and_write": false,
00:23:44.647 "abort": true,
00:23:44.647 "seek_hole": false,
00:23:44.647 "seek_data": false,
00:23:44.647 "copy": true,
00:23:44.647 "nvme_iov_md": false
00:23:44.647 },
00:23:44.647 "memory_domains": [
00:23:44.647 {
00:23:44.647 "dma_device_id": "system",
00:23:44.647 "dma_device_type": 1
00:23:44.647 },
00:23:44.647 {
00:23:44.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:44.647 "dma_device_type": 2
00:23:44.647 }
00:23:44.647 ],
00:23:44.647 "driver_specific": {}
00:23:44.647 }
00:23:44.647 ]
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:44.647 "name": "Existed_Raid",
00:23:44.647 "uuid": "0640121c-9941-4319-9c00-23a249bf006e",
00:23:44.647 "strip_size_kb": 0,
00:23:44.647 "state": "online",
00:23:44.647 "raid_level": "raid1",
00:23:44.647 "superblock": true,
00:23:44.647 "num_base_bdevs": 3,
00:23:44.647 "num_base_bdevs_discovered": 3,
00:23:44.647 "num_base_bdevs_operational": 3,
00:23:44.647 "base_bdevs_list": [
00:23:44.647 {
00:23:44.647 "name": "BaseBdev1",
00:23:44.647 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:44.647 "is_configured": true,
00:23:44.647 "data_offset": 2048,
00:23:44.647 "data_size": 63488
00:23:44.647 },
00:23:44.647 {
00:23:44.647 "name": "BaseBdev2",
00:23:44.647 "uuid": "e4b5677c-0498-4d7c-8912-b4f1261c951f",
00:23:44.647 "is_configured": true,
00:23:44.647 "data_offset": 2048,
00:23:44.647 "data_size": 63488
00:23:44.647 },
00:23:44.647 {
00:23:44.647 "name": "BaseBdev3",
00:23:44.647 "uuid": "487b9a09-57f5-4af2-bc22-6ee05d2bc80d",
00:23:44.647 "is_configured": true,
00:23:44.647 "data_offset": 2048,
00:23:44.647 "data_size": 63488
00:23:44.647 }
00:23:44.647 ]
00:23:44.647 }'
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:44.647 07:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.214 [2024-11-20 07:21:09.276294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.214 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:45.214 "name": "Existed_Raid",
00:23:45.214 "aliases": [
00:23:45.214 "0640121c-9941-4319-9c00-23a249bf006e"
00:23:45.214 ],
00:23:45.214 "product_name": "Raid Volume",
00:23:45.214 "block_size": 512,
00:23:45.214 "num_blocks": 63488,
00:23:45.214 "uuid": "0640121c-9941-4319-9c00-23a249bf006e",
00:23:45.214 "assigned_rate_limits": {
00:23:45.214 "rw_ios_per_sec": 0,
00:23:45.214 "rw_mbytes_per_sec": 0,
00:23:45.214 "r_mbytes_per_sec": 0,
00:23:45.214 "w_mbytes_per_sec": 0
00:23:45.214 },
00:23:45.214 "claimed": false,
00:23:45.214 "zoned": false,
00:23:45.214 "supported_io_types": {
00:23:45.214 "read": true,
00:23:45.214 "write": true,
00:23:45.214 "unmap": false,
00:23:45.214 "flush": false,
00:23:45.214 "reset": true,
00:23:45.214 "nvme_admin": false,
00:23:45.214 "nvme_io": false,
00:23:45.214 "nvme_io_md": false,
00:23:45.214 "write_zeroes": true,
00:23:45.214 "zcopy": false,
00:23:45.214 "get_zone_info": false,
00:23:45.214 "zone_management": false,
00:23:45.214 "zone_append": false,
00:23:45.214 "compare": false,
00:23:45.214 "compare_and_write": false,
00:23:45.214 "abort": false,
00:23:45.214 "seek_hole": false,
00:23:45.214 "seek_data": false,
00:23:45.214 "copy": false,
00:23:45.214 "nvme_iov_md": false
00:23:45.214 },
00:23:45.214 "memory_domains": [
00:23:45.214 {
00:23:45.214 "dma_device_id": "system",
00:23:45.214 "dma_device_type": 1
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:45.214 "dma_device_type": 2
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "dma_device_id": "system",
00:23:45.214 "dma_device_type": 1
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:45.214 "dma_device_type": 2
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "dma_device_id": "system",
00:23:45.214 "dma_device_type": 1
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:45.214 "dma_device_type": 2
00:23:45.214 }
00:23:45.214 ],
00:23:45.214 "driver_specific": {
00:23:45.214 "raid": {
00:23:45.214 "uuid": "0640121c-9941-4319-9c00-23a249bf006e",
00:23:45.214 "strip_size_kb": 0,
00:23:45.214 "state": "online",
00:23:45.214 "raid_level": "raid1",
00:23:45.214 "superblock": true,
00:23:45.214 "num_base_bdevs": 3,
00:23:45.214 "num_base_bdevs_discovered": 3,
00:23:45.214 "num_base_bdevs_operational": 3,
00:23:45.214 "base_bdevs_list": [
00:23:45.214 {
00:23:45.214 "name": "BaseBdev1",
00:23:45.214 "uuid": "11dab4bb-c254-482d-92a7-cfca246d89bf",
00:23:45.214 "is_configured": true,
00:23:45.214 "data_offset": 2048,
00:23:45.214 "data_size": 63488
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "name": "BaseBdev2",
00:23:45.214 "uuid": "e4b5677c-0498-4d7c-8912-b4f1261c951f",
00:23:45.214 "is_configured": true,
00:23:45.214 "data_offset": 2048,
00:23:45.214 "data_size": 63488
00:23:45.214 },
00:23:45.214 {
00:23:45.214 "name": "BaseBdev3",
00:23:45.214 "uuid": "487b9a09-57f5-4af2-bc22-6ee05d2bc80d",
00:23:45.214 "is_configured": true,
00:23:45.215 "data_offset": 2048,
00:23:45.215 "data_size": 63488
00:23:45.215 }
00:23:45.215 ]
00:23:45.215 }
00:23:45.215 }
00:23:45.215 }'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:23:45.215 BaseBdev2
00:23:45.215 BaseBdev3'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:45.215 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:45.473 [2024-11-20 07:21:09.584063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:45.473
07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.473 "name": "Existed_Raid", 00:23:45.473 "uuid": "0640121c-9941-4319-9c00-23a249bf006e", 00:23:45.473 "strip_size_kb": 0, 00:23:45.473 "state": "online", 00:23:45.473 "raid_level": "raid1", 00:23:45.473 "superblock": true, 00:23:45.473 "num_base_bdevs": 3, 00:23:45.473 "num_base_bdevs_discovered": 2, 00:23:45.473 "num_base_bdevs_operational": 2, 00:23:45.473 "base_bdevs_list": [ 00:23:45.473 { 00:23:45.473 "name": null, 00:23:45.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.473 "is_configured": false, 00:23:45.473 "data_offset": 0, 00:23:45.473 "data_size": 63488 00:23:45.473 }, 00:23:45.473 { 00:23:45.473 "name": "BaseBdev2", 00:23:45.473 "uuid": "e4b5677c-0498-4d7c-8912-b4f1261c951f", 00:23:45.473 "is_configured": true, 00:23:45.473 "data_offset": 2048, 00:23:45.473 "data_size": 63488 00:23:45.473 }, 00:23:45.473 { 00:23:45.473 "name": "BaseBdev3", 00:23:45.473 "uuid": "487b9a09-57f5-4af2-bc22-6ee05d2bc80d", 00:23:45.473 "is_configured": true, 00:23:45.473 "data_offset": 2048, 00:23:45.473 "data_size": 63488 00:23:45.473 } 00:23:45.473 ] 00:23:45.473 }' 00:23:45.473 07:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.473 
07:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.039 [2024-11-20 07:21:10.235537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.039 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 [2024-11-20 07:21:10.373415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:46.298 [2024-11-20 07:21:10.373547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.298 [2024-11-20 07:21:10.460434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.298 [2024-11-20 07:21:10.460798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.298 [2024-11-20 07:21:10.460960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 BaseBdev2 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.298 [ 00:23:46.298 { 00:23:46.298 "name": "BaseBdev2", 00:23:46.298 "aliases": [ 00:23:46.298 "66d5d6f9-6990-4656-88f7-0f7b310cd3d5" 00:23:46.298 ], 00:23:46.298 "product_name": "Malloc disk", 00:23:46.298 "block_size": 512, 00:23:46.298 "num_blocks": 65536, 00:23:46.298 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:46.298 "assigned_rate_limits": { 00:23:46.298 "rw_ios_per_sec": 0, 00:23:46.298 "rw_mbytes_per_sec": 0, 00:23:46.298 "r_mbytes_per_sec": 0, 00:23:46.298 "w_mbytes_per_sec": 0 00:23:46.298 }, 00:23:46.298 "claimed": false, 00:23:46.298 "zoned": false, 00:23:46.298 "supported_io_types": { 00:23:46.298 "read": true, 00:23:46.298 "write": true, 00:23:46.298 "unmap": true, 00:23:46.298 "flush": true, 00:23:46.298 "reset": true, 00:23:46.298 "nvme_admin": false, 00:23:46.298 "nvme_io": false, 00:23:46.298 
"nvme_io_md": false, 00:23:46.298 "write_zeroes": true, 00:23:46.298 "zcopy": true, 00:23:46.298 "get_zone_info": false, 00:23:46.298 "zone_management": false, 00:23:46.298 "zone_append": false, 00:23:46.298 "compare": false, 00:23:46.298 "compare_and_write": false, 00:23:46.298 "abort": true, 00:23:46.298 "seek_hole": false, 00:23:46.298 "seek_data": false, 00:23:46.298 "copy": true, 00:23:46.298 "nvme_iov_md": false 00:23:46.298 }, 00:23:46.298 "memory_domains": [ 00:23:46.298 { 00:23:46.298 "dma_device_id": "system", 00:23:46.298 "dma_device_type": 1 00:23:46.298 }, 00:23:46.298 { 00:23:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.298 "dma_device_type": 2 00:23:46.298 } 00:23:46.298 ], 00:23:46.298 "driver_specific": {} 00:23:46.298 } 00:23:46.298 ] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.298 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 BaseBdev3 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 [ 00:23:46.557 { 00:23:46.557 "name": "BaseBdev3", 00:23:46.557 "aliases": [ 00:23:46.557 "0f6f62e8-ca6a-4066-899f-f5a54e7374dd" 00:23:46.557 ], 00:23:46.557 "product_name": "Malloc disk", 00:23:46.557 "block_size": 512, 00:23:46.557 "num_blocks": 65536, 00:23:46.557 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:46.557 "assigned_rate_limits": { 00:23:46.557 "rw_ios_per_sec": 0, 00:23:46.557 "rw_mbytes_per_sec": 0, 00:23:46.557 "r_mbytes_per_sec": 0, 00:23:46.557 "w_mbytes_per_sec": 0 00:23:46.557 }, 00:23:46.557 "claimed": false, 00:23:46.557 "zoned": false, 00:23:46.557 "supported_io_types": { 00:23:46.557 "read": true, 00:23:46.557 "write": true, 00:23:46.557 "unmap": true, 00:23:46.557 "flush": true, 00:23:46.557 "reset": true, 00:23:46.557 "nvme_admin": false, 
00:23:46.557 "nvme_io": false, 00:23:46.557 "nvme_io_md": false, 00:23:46.557 "write_zeroes": true, 00:23:46.557 "zcopy": true, 00:23:46.557 "get_zone_info": false, 00:23:46.557 "zone_management": false, 00:23:46.557 "zone_append": false, 00:23:46.557 "compare": false, 00:23:46.557 "compare_and_write": false, 00:23:46.557 "abort": true, 00:23:46.557 "seek_hole": false, 00:23:46.557 "seek_data": false, 00:23:46.557 "copy": true, 00:23:46.557 "nvme_iov_md": false 00:23:46.557 }, 00:23:46.557 "memory_domains": [ 00:23:46.557 { 00:23:46.557 "dma_device_id": "system", 00:23:46.557 "dma_device_type": 1 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.557 "dma_device_type": 2 00:23:46.557 } 00:23:46.557 ], 00:23:46.557 "driver_specific": {} 00:23:46.557 } 00:23:46.557 ] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 [2024-11-20 07:21:10.659289] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:46.557 [2024-11-20 07:21:10.659482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:46.557 [2024-11-20 07:21:10.659658] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:46.557 [2024-11-20 07:21:10.662181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.557 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.558 
07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.558 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.558 "name": "Existed_Raid", 00:23:46.558 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:46.558 "strip_size_kb": 0, 00:23:46.558 "state": "configuring", 00:23:46.558 "raid_level": "raid1", 00:23:46.558 "superblock": true, 00:23:46.558 "num_base_bdevs": 3, 00:23:46.558 "num_base_bdevs_discovered": 2, 00:23:46.558 "num_base_bdevs_operational": 3, 00:23:46.558 "base_bdevs_list": [ 00:23:46.558 { 00:23:46.558 "name": "BaseBdev1", 00:23:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.558 "is_configured": false, 00:23:46.558 "data_offset": 0, 00:23:46.558 "data_size": 0 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "name": "BaseBdev2", 00:23:46.558 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:46.558 "is_configured": true, 00:23:46.558 "data_offset": 2048, 00:23:46.558 "data_size": 63488 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "name": "BaseBdev3", 00:23:46.558 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:46.558 "is_configured": true, 00:23:46.558 "data_offset": 2048, 00:23:46.558 "data_size": 63488 00:23:46.558 } 00:23:46.558 ] 00:23:46.558 }' 00:23:46.558 07:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.558 07:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.123 [2024-11-20 07:21:11.143472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:47.123 07:21:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.123 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.123 "name": 
"Existed_Raid", 00:23:47.123 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:47.123 "strip_size_kb": 0, 00:23:47.123 "state": "configuring", 00:23:47.123 "raid_level": "raid1", 00:23:47.123 "superblock": true, 00:23:47.123 "num_base_bdevs": 3, 00:23:47.123 "num_base_bdevs_discovered": 1, 00:23:47.123 "num_base_bdevs_operational": 3, 00:23:47.123 "base_bdevs_list": [ 00:23:47.123 { 00:23:47.123 "name": "BaseBdev1", 00:23:47.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.123 "is_configured": false, 00:23:47.123 "data_offset": 0, 00:23:47.123 "data_size": 0 00:23:47.123 }, 00:23:47.123 { 00:23:47.123 "name": null, 00:23:47.123 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:47.124 "is_configured": false, 00:23:47.124 "data_offset": 0, 00:23:47.124 "data_size": 63488 00:23:47.124 }, 00:23:47.124 { 00:23:47.124 "name": "BaseBdev3", 00:23:47.124 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:47.124 "is_configured": true, 00:23:47.124 "data_offset": 2048, 00:23:47.124 "data_size": 63488 00:23:47.124 } 00:23:47.124 ] 00:23:47.124 }' 00:23:47.124 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.124 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.382 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.382 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:47.382 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.382 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.382 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:47.640 
07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.640 [2024-11-20 07:21:11.737872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.640 BaseBdev1 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.640 [ 00:23:47.640 { 00:23:47.640 "name": "BaseBdev1", 00:23:47.640 "aliases": [ 00:23:47.640 "86a9c51d-e51c-465c-9caa-fcbc6e8f368a" 00:23:47.640 ], 00:23:47.640 "product_name": "Malloc disk", 00:23:47.640 "block_size": 512, 00:23:47.640 "num_blocks": 65536, 00:23:47.640 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:47.640 "assigned_rate_limits": { 00:23:47.640 "rw_ios_per_sec": 0, 00:23:47.640 "rw_mbytes_per_sec": 0, 00:23:47.640 "r_mbytes_per_sec": 0, 00:23:47.640 "w_mbytes_per_sec": 0 00:23:47.640 }, 00:23:47.640 "claimed": true, 00:23:47.640 "claim_type": "exclusive_write", 00:23:47.640 "zoned": false, 00:23:47.640 "supported_io_types": { 00:23:47.640 "read": true, 00:23:47.640 "write": true, 00:23:47.640 "unmap": true, 00:23:47.640 "flush": true, 00:23:47.640 "reset": true, 00:23:47.640 "nvme_admin": false, 00:23:47.640 "nvme_io": false, 00:23:47.640 "nvme_io_md": false, 00:23:47.640 "write_zeroes": true, 00:23:47.640 "zcopy": true, 00:23:47.640 "get_zone_info": false, 00:23:47.640 "zone_management": false, 00:23:47.640 "zone_append": false, 00:23:47.640 "compare": false, 00:23:47.640 "compare_and_write": false, 00:23:47.640 "abort": true, 00:23:47.640 "seek_hole": false, 00:23:47.640 "seek_data": false, 00:23:47.640 "copy": true, 00:23:47.640 "nvme_iov_md": false 00:23:47.640 }, 00:23:47.640 "memory_domains": [ 00:23:47.640 { 00:23:47.640 "dma_device_id": "system", 00:23:47.640 "dma_device_type": 1 00:23:47.640 }, 00:23:47.640 { 00:23:47.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.640 "dma_device_type": 2 00:23:47.640 } 00:23:47.640 ], 00:23:47.640 "driver_specific": {} 00:23:47.640 } 00:23:47.640 ] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:47.640 
07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.640 "name": "Existed_Raid", 00:23:47.640 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:47.640 "strip_size_kb": 0, 
00:23:47.640 "state": "configuring", 00:23:47.640 "raid_level": "raid1", 00:23:47.640 "superblock": true, 00:23:47.640 "num_base_bdevs": 3, 00:23:47.640 "num_base_bdevs_discovered": 2, 00:23:47.640 "num_base_bdevs_operational": 3, 00:23:47.640 "base_bdevs_list": [ 00:23:47.640 { 00:23:47.640 "name": "BaseBdev1", 00:23:47.640 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:47.640 "is_configured": true, 00:23:47.640 "data_offset": 2048, 00:23:47.640 "data_size": 63488 00:23:47.640 }, 00:23:47.640 { 00:23:47.640 "name": null, 00:23:47.640 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:47.640 "is_configured": false, 00:23:47.640 "data_offset": 0, 00:23:47.640 "data_size": 63488 00:23:47.640 }, 00:23:47.640 { 00:23:47.640 "name": "BaseBdev3", 00:23:47.640 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:47.640 "is_configured": true, 00:23:47.640 "data_offset": 2048, 00:23:47.640 "data_size": 63488 00:23:47.640 } 00:23:47.640 ] 00:23:47.640 }' 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.640 07:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.303 [2024-11-20 07:21:12.346085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.303 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.303 "name": "Existed_Raid", 00:23:48.303 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:48.303 "strip_size_kb": 0, 00:23:48.303 "state": "configuring", 00:23:48.303 "raid_level": "raid1", 00:23:48.303 "superblock": true, 00:23:48.303 "num_base_bdevs": 3, 00:23:48.303 "num_base_bdevs_discovered": 1, 00:23:48.303 "num_base_bdevs_operational": 3, 00:23:48.303 "base_bdevs_list": [ 00:23:48.303 { 00:23:48.303 "name": "BaseBdev1", 00:23:48.303 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:48.303 "is_configured": true, 00:23:48.303 "data_offset": 2048, 00:23:48.303 "data_size": 63488 00:23:48.303 }, 00:23:48.303 { 00:23:48.303 "name": null, 00:23:48.303 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:48.303 "is_configured": false, 00:23:48.303 "data_offset": 0, 00:23:48.303 "data_size": 63488 00:23:48.303 }, 00:23:48.303 { 00:23:48.303 "name": null, 00:23:48.303 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:48.304 "is_configured": false, 00:23:48.304 "data_offset": 0, 00:23:48.304 "data_size": 63488 00:23:48.304 } 00:23:48.304 ] 00:23:48.304 }' 00:23:48.304 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.304 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 07:21:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 [2024-11-20 07:21:12.898324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.870 "name": "Existed_Raid", 00:23:48.870 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:48.870 "strip_size_kb": 0, 00:23:48.870 "state": "configuring", 00:23:48.870 "raid_level": "raid1", 00:23:48.870 "superblock": true, 00:23:48.870 "num_base_bdevs": 3, 00:23:48.870 "num_base_bdevs_discovered": 2, 00:23:48.870 "num_base_bdevs_operational": 3, 00:23:48.870 "base_bdevs_list": [ 00:23:48.870 { 00:23:48.870 "name": "BaseBdev1", 00:23:48.870 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:48.870 "is_configured": true, 00:23:48.870 "data_offset": 2048, 00:23:48.870 "data_size": 63488 00:23:48.870 }, 00:23:48.870 { 00:23:48.870 "name": null, 00:23:48.870 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:48.870 "is_configured": false, 00:23:48.870 "data_offset": 0, 00:23:48.870 "data_size": 63488 00:23:48.870 }, 00:23:48.870 { 00:23:48.870 "name": "BaseBdev3", 00:23:48.870 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:48.870 "is_configured": true, 00:23:48.870 "data_offset": 2048, 00:23:48.870 "data_size": 63488 00:23:48.870 } 00:23:48.870 ] 00:23:48.870 }' 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.870 07:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.129 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.129 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.129 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.129 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.388 [2024-11-20 07:21:13.462525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.388 "name": "Existed_Raid", 00:23:49.388 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:49.388 "strip_size_kb": 0, 00:23:49.388 "state": "configuring", 00:23:49.388 "raid_level": "raid1", 00:23:49.388 "superblock": true, 00:23:49.388 "num_base_bdevs": 3, 00:23:49.388 "num_base_bdevs_discovered": 1, 00:23:49.388 "num_base_bdevs_operational": 3, 00:23:49.388 "base_bdevs_list": [ 00:23:49.388 { 00:23:49.388 "name": null, 00:23:49.388 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:49.388 "is_configured": false, 00:23:49.388 "data_offset": 0, 00:23:49.388 "data_size": 63488 00:23:49.388 }, 00:23:49.388 { 00:23:49.388 "name": null, 00:23:49.388 "uuid": 
"66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:49.388 "is_configured": false, 00:23:49.388 "data_offset": 0, 00:23:49.388 "data_size": 63488 00:23:49.388 }, 00:23:49.388 { 00:23:49.388 "name": "BaseBdev3", 00:23:49.388 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:49.388 "is_configured": true, 00:23:49.388 "data_offset": 2048, 00:23:49.388 "data_size": 63488 00:23:49.388 } 00:23:49.388 ] 00:23:49.388 }' 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.388 07:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.956 [2024-11-20 07:21:14.148326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.956 "name": "Existed_Raid", 00:23:49.956 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:49.956 "strip_size_kb": 0, 00:23:49.956 "state": "configuring", 00:23:49.956 
"raid_level": "raid1", 00:23:49.956 "superblock": true, 00:23:49.956 "num_base_bdevs": 3, 00:23:49.956 "num_base_bdevs_discovered": 2, 00:23:49.956 "num_base_bdevs_operational": 3, 00:23:49.956 "base_bdevs_list": [ 00:23:49.956 { 00:23:49.956 "name": null, 00:23:49.956 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:49.956 "is_configured": false, 00:23:49.956 "data_offset": 0, 00:23:49.956 "data_size": 63488 00:23:49.956 }, 00:23:49.956 { 00:23:49.956 "name": "BaseBdev2", 00:23:49.956 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:49.956 "is_configured": true, 00:23:49.956 "data_offset": 2048, 00:23:49.956 "data_size": 63488 00:23:49.956 }, 00:23:49.956 { 00:23:49.956 "name": "BaseBdev3", 00:23:49.956 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:49.956 "is_configured": true, 00:23:49.956 "data_offset": 2048, 00:23:49.956 "data_size": 63488 00:23:49.956 } 00:23:49.956 ] 00:23:49.956 }' 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.956 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.524 07:21:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 86a9c51d-e51c-465c-9caa-fcbc6e8f368a 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.524 [2024-11-20 07:21:14.804157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:50.524 [2024-11-20 07:21:14.804433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:50.524 [2024-11-20 07:21:14.804452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:50.524 [2024-11-20 07:21:14.804826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:50.524 NewBaseBdev 00:23:50.524 [2024-11-20 07:21:14.805037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:50.524 [2024-11-20 07:21:14.805060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:50.524 [2024-11-20 07:21:14.805223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:50.524 
07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.524 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.783 [ 00:23:50.783 { 00:23:50.783 "name": "NewBaseBdev", 00:23:50.783 "aliases": [ 00:23:50.783 "86a9c51d-e51c-465c-9caa-fcbc6e8f368a" 00:23:50.783 ], 00:23:50.783 "product_name": "Malloc disk", 00:23:50.783 "block_size": 512, 00:23:50.783 "num_blocks": 65536, 00:23:50.783 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:50.783 "assigned_rate_limits": { 00:23:50.783 "rw_ios_per_sec": 0, 00:23:50.783 "rw_mbytes_per_sec": 0, 00:23:50.783 "r_mbytes_per_sec": 0, 00:23:50.783 "w_mbytes_per_sec": 0 00:23:50.783 }, 00:23:50.783 "claimed": true, 00:23:50.783 "claim_type": "exclusive_write", 00:23:50.783 
"zoned": false, 00:23:50.783 "supported_io_types": { 00:23:50.783 "read": true, 00:23:50.783 "write": true, 00:23:50.783 "unmap": true, 00:23:50.783 "flush": true, 00:23:50.783 "reset": true, 00:23:50.783 "nvme_admin": false, 00:23:50.783 "nvme_io": false, 00:23:50.783 "nvme_io_md": false, 00:23:50.783 "write_zeroes": true, 00:23:50.783 "zcopy": true, 00:23:50.783 "get_zone_info": false, 00:23:50.783 "zone_management": false, 00:23:50.783 "zone_append": false, 00:23:50.783 "compare": false, 00:23:50.783 "compare_and_write": false, 00:23:50.783 "abort": true, 00:23:50.783 "seek_hole": false, 00:23:50.783 "seek_data": false, 00:23:50.783 "copy": true, 00:23:50.783 "nvme_iov_md": false 00:23:50.783 }, 00:23:50.783 "memory_domains": [ 00:23:50.783 { 00:23:50.783 "dma_device_id": "system", 00:23:50.783 "dma_device_type": 1 00:23:50.783 }, 00:23:50.783 { 00:23:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.783 "dma_device_type": 2 00:23:50.783 } 00:23:50.783 ], 00:23:50.783 "driver_specific": {} 00:23:50.783 } 00:23:50.783 ] 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.783 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.784 "name": "Existed_Raid", 00:23:50.784 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:50.784 "strip_size_kb": 0, 00:23:50.784 "state": "online", 00:23:50.784 "raid_level": "raid1", 00:23:50.784 "superblock": true, 00:23:50.784 "num_base_bdevs": 3, 00:23:50.784 "num_base_bdevs_discovered": 3, 00:23:50.784 "num_base_bdevs_operational": 3, 00:23:50.784 "base_bdevs_list": [ 00:23:50.784 { 00:23:50.784 "name": "NewBaseBdev", 00:23:50.784 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:50.784 "is_configured": true, 00:23:50.784 "data_offset": 2048, 00:23:50.784 "data_size": 63488 00:23:50.784 }, 00:23:50.784 { 00:23:50.784 "name": "BaseBdev2", 00:23:50.784 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:50.784 "is_configured": true, 00:23:50.784 "data_offset": 2048, 00:23:50.784 "data_size": 63488 00:23:50.784 }, 00:23:50.784 
{ 00:23:50.784 "name": "BaseBdev3", 00:23:50.784 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:50.784 "is_configured": true, 00:23:50.784 "data_offset": 2048, 00:23:50.784 "data_size": 63488 00:23:50.784 } 00:23:50.784 ] 00:23:50.784 }' 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.784 07:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.351 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.352 [2024-11-20 07:21:15.404785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.352 "name": "Existed_Raid", 00:23:51.352 
"aliases": [ 00:23:51.352 "3f85b235-fa1b-4e4f-8597-79dd8222e070" 00:23:51.352 ], 00:23:51.352 "product_name": "Raid Volume", 00:23:51.352 "block_size": 512, 00:23:51.352 "num_blocks": 63488, 00:23:51.352 "uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:51.352 "assigned_rate_limits": { 00:23:51.352 "rw_ios_per_sec": 0, 00:23:51.352 "rw_mbytes_per_sec": 0, 00:23:51.352 "r_mbytes_per_sec": 0, 00:23:51.352 "w_mbytes_per_sec": 0 00:23:51.352 }, 00:23:51.352 "claimed": false, 00:23:51.352 "zoned": false, 00:23:51.352 "supported_io_types": { 00:23:51.352 "read": true, 00:23:51.352 "write": true, 00:23:51.352 "unmap": false, 00:23:51.352 "flush": false, 00:23:51.352 "reset": true, 00:23:51.352 "nvme_admin": false, 00:23:51.352 "nvme_io": false, 00:23:51.352 "nvme_io_md": false, 00:23:51.352 "write_zeroes": true, 00:23:51.352 "zcopy": false, 00:23:51.352 "get_zone_info": false, 00:23:51.352 "zone_management": false, 00:23:51.352 "zone_append": false, 00:23:51.352 "compare": false, 00:23:51.352 "compare_and_write": false, 00:23:51.352 "abort": false, 00:23:51.352 "seek_hole": false, 00:23:51.352 "seek_data": false, 00:23:51.352 "copy": false, 00:23:51.352 "nvme_iov_md": false 00:23:51.352 }, 00:23:51.352 "memory_domains": [ 00:23:51.352 { 00:23:51.352 "dma_device_id": "system", 00:23:51.352 "dma_device_type": 1 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.352 "dma_device_type": 2 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "dma_device_id": "system", 00:23:51.352 "dma_device_type": 1 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.352 "dma_device_type": 2 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "dma_device_id": "system", 00:23:51.352 "dma_device_type": 1 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.352 "dma_device_type": 2 00:23:51.352 } 00:23:51.352 ], 00:23:51.352 "driver_specific": { 00:23:51.352 "raid": { 00:23:51.352 
"uuid": "3f85b235-fa1b-4e4f-8597-79dd8222e070", 00:23:51.352 "strip_size_kb": 0, 00:23:51.352 "state": "online", 00:23:51.352 "raid_level": "raid1", 00:23:51.352 "superblock": true, 00:23:51.352 "num_base_bdevs": 3, 00:23:51.352 "num_base_bdevs_discovered": 3, 00:23:51.352 "num_base_bdevs_operational": 3, 00:23:51.352 "base_bdevs_list": [ 00:23:51.352 { 00:23:51.352 "name": "NewBaseBdev", 00:23:51.352 "uuid": "86a9c51d-e51c-465c-9caa-fcbc6e8f368a", 00:23:51.352 "is_configured": true, 00:23:51.352 "data_offset": 2048, 00:23:51.352 "data_size": 63488 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "name": "BaseBdev2", 00:23:51.352 "uuid": "66d5d6f9-6990-4656-88f7-0f7b310cd3d5", 00:23:51.352 "is_configured": true, 00:23:51.352 "data_offset": 2048, 00:23:51.352 "data_size": 63488 00:23:51.352 }, 00:23:51.352 { 00:23:51.352 "name": "BaseBdev3", 00:23:51.352 "uuid": "0f6f62e8-ca6a-4066-899f-f5a54e7374dd", 00:23:51.352 "is_configured": true, 00:23:51.352 "data_offset": 2048, 00:23:51.352 "data_size": 63488 00:23:51.352 } 00:23:51.352 ] 00:23:51.352 } 00:23:51.352 } 00:23:51.352 }' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:51.352 BaseBdev2 00:23:51.352 BaseBdev3' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:51.352 07:21:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.352 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.611 [2024-11-20 07:21:15.852512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:51.611 [2024-11-20 07:21:15.852558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.611 [2024-11-20 07:21:15.852678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.611 [2024-11-20 07:21:15.853061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:51.611 [2024-11-20 07:21:15.853088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68291 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68291 ']' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68291 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68291 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.611 killing process with pid 68291 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68291' 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68291 00:23:51.611 [2024-11-20 07:21:15.885994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:51.611 07:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68291 00:23:51.871 [2024-11-20 07:21:16.159506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:53.248 07:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:53.248 00:23:53.248 real 0m11.820s 00:23:53.248 user 0m19.655s 00:23:53.248 sys 0m1.572s 00:23:53.248 07:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.248 07:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.248 ************************************ 00:23:53.248 END TEST raid_state_function_test_sb 00:23:53.248 ************************************ 00:23:53.248 07:21:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:23:53.248 07:21:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:53.248 07:21:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.249 07:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:53.249 ************************************ 00:23:53.249 START TEST raid_superblock_test 00:23:53.249 ************************************ 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68923 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68923 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68923 ']' 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.249 07:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.249 [2024-11-20 07:21:17.359108] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:23:53.249 [2024-11-20 07:21:17.359279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68923 ] 00:23:53.249 [2024-11-20 07:21:17.533711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.507 [2024-11-20 07:21:17.668443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.766 [2024-11-20 07:21:17.876457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.766 [2024-11-20 07:21:17.876497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:54.335 
07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.335 malloc1 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.335 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.335 [2024-11-20 07:21:18.440730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:54.335 [2024-11-20 07:21:18.440813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.336 [2024-11-20 07:21:18.440862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:54.336 [2024-11-20 07:21:18.440879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.336 [2024-11-20 07:21:18.443983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.336 [2024-11-20 07:21:18.444060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:54.336 pt1 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 malloc2 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 [2024-11-20 07:21:18.499687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:54.336 [2024-11-20 07:21:18.499762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.336 [2024-11-20 07:21:18.499796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:54.336 [2024-11-20 07:21:18.499811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.336 [2024-11-20 07:21:18.502709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.336 [2024-11-20 07:21:18.502755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:54.336 
pt2 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 malloc3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 [2024-11-20 07:21:18.572756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:54.336 [2024-11-20 07:21:18.572829] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.336 [2024-11-20 07:21:18.572870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:54.336 [2024-11-20 07:21:18.572888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.336 [2024-11-20 07:21:18.575790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.336 [2024-11-20 07:21:18.575837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:54.336 pt3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 [2024-11-20 07:21:18.580816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:54.336 [2024-11-20 07:21:18.583247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:54.336 [2024-11-20 07:21:18.583351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:54.336 [2024-11-20 07:21:18.583577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:54.336 [2024-11-20 07:21:18.583650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:54.336 [2024-11-20 07:21:18.583988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:54.336 
[2024-11-20 07:21:18.584232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:54.336 [2024-11-20 07:21:18.584263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:54.336 [2024-11-20 07:21:18.584466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.336 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.595 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.595 "name": "raid_bdev1", 00:23:54.595 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:54.595 "strip_size_kb": 0, 00:23:54.595 "state": "online", 00:23:54.595 "raid_level": "raid1", 00:23:54.595 "superblock": true, 00:23:54.595 "num_base_bdevs": 3, 00:23:54.595 "num_base_bdevs_discovered": 3, 00:23:54.595 "num_base_bdevs_operational": 3, 00:23:54.595 "base_bdevs_list": [ 00:23:54.595 { 00:23:54.595 "name": "pt1", 00:23:54.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:54.595 "is_configured": true, 00:23:54.595 "data_offset": 2048, 00:23:54.595 "data_size": 63488 00:23:54.595 }, 00:23:54.595 { 00:23:54.595 "name": "pt2", 00:23:54.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:54.595 "is_configured": true, 00:23:54.595 "data_offset": 2048, 00:23:54.595 "data_size": 63488 00:23:54.595 }, 00:23:54.595 { 00:23:54.595 "name": "pt3", 00:23:54.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:54.595 "is_configured": true, 00:23:54.595 "data_offset": 2048, 00:23:54.595 "data_size": 63488 00:23:54.595 } 00:23:54.595 ] 00:23:54.595 }' 00:23:54.595 07:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.595 07:21:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:54.931 07:21:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.931 [2024-11-20 07:21:19.105336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.931 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.931 "name": "raid_bdev1", 00:23:54.931 "aliases": [ 00:23:54.931 "60e33761-0384-42c9-8589-29c491553ca2" 00:23:54.931 ], 00:23:54.931 "product_name": "Raid Volume", 00:23:54.931 "block_size": 512, 00:23:54.931 "num_blocks": 63488, 00:23:54.931 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:54.931 "assigned_rate_limits": { 00:23:54.931 "rw_ios_per_sec": 0, 00:23:54.931 "rw_mbytes_per_sec": 0, 00:23:54.931 "r_mbytes_per_sec": 0, 00:23:54.931 "w_mbytes_per_sec": 0 00:23:54.931 }, 00:23:54.931 "claimed": false, 00:23:54.931 "zoned": false, 00:23:54.931 "supported_io_types": { 00:23:54.931 "read": true, 00:23:54.931 "write": true, 00:23:54.931 "unmap": false, 00:23:54.931 "flush": false, 00:23:54.931 "reset": true, 00:23:54.931 "nvme_admin": false, 00:23:54.931 "nvme_io": false, 00:23:54.932 "nvme_io_md": false, 00:23:54.932 "write_zeroes": true, 00:23:54.932 "zcopy": false, 00:23:54.932 "get_zone_info": false, 00:23:54.932 "zone_management": false, 00:23:54.932 "zone_append": false, 00:23:54.932 "compare": false, 00:23:54.932 
"compare_and_write": false, 00:23:54.932 "abort": false, 00:23:54.932 "seek_hole": false, 00:23:54.932 "seek_data": false, 00:23:54.932 "copy": false, 00:23:54.932 "nvme_iov_md": false 00:23:54.932 }, 00:23:54.932 "memory_domains": [ 00:23:54.932 { 00:23:54.932 "dma_device_id": "system", 00:23:54.932 "dma_device_type": 1 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.932 "dma_device_type": 2 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "dma_device_id": "system", 00:23:54.932 "dma_device_type": 1 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.932 "dma_device_type": 2 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "dma_device_id": "system", 00:23:54.932 "dma_device_type": 1 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.932 "dma_device_type": 2 00:23:54.932 } 00:23:54.932 ], 00:23:54.932 "driver_specific": { 00:23:54.932 "raid": { 00:23:54.932 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:54.932 "strip_size_kb": 0, 00:23:54.932 "state": "online", 00:23:54.932 "raid_level": "raid1", 00:23:54.932 "superblock": true, 00:23:54.932 "num_base_bdevs": 3, 00:23:54.932 "num_base_bdevs_discovered": 3, 00:23:54.932 "num_base_bdevs_operational": 3, 00:23:54.932 "base_bdevs_list": [ 00:23:54.932 { 00:23:54.932 "name": "pt1", 00:23:54.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:54.932 "is_configured": true, 00:23:54.932 "data_offset": 2048, 00:23:54.932 "data_size": 63488 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "name": "pt2", 00:23:54.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:54.932 "is_configured": true, 00:23:54.932 "data_offset": 2048, 00:23:54.932 "data_size": 63488 00:23:54.932 }, 00:23:54.932 { 00:23:54.932 "name": "pt3", 00:23:54.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:54.932 "is_configured": true, 00:23:54.932 "data_offset": 2048, 00:23:54.932 "data_size": 63488 00:23:54.932 } 
00:23:54.932 ] 00:23:54.932 } 00:23:54.932 } 00:23:54.932 }' 00:23:54.932 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:54.932 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:54.932 pt2 00:23:54.932 pt3' 00:23:54.932 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 [2024-11-20 07:21:19.417403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=60e33761-0384-42c9-8589-29c491553ca2 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 60e33761-0384-42c9-8589-29c491553ca2 ']' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 [2024-11-20 07:21:19.461036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.201 [2024-11-20 07:21:19.461077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.201 [2024-11-20 07:21:19.461176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.201 [2024-11-20 07:21:19.461276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.201 [2024-11-20 07:21:19.461293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 [2024-11-20 07:21:19.597142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:55.466 [2024-11-20 07:21:19.599692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:55.466 [2024-11-20 07:21:19.599774] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:55.466 [2024-11-20 07:21:19.599849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:55.466 [2024-11-20 07:21:19.599927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:55.466 [2024-11-20 07:21:19.599962] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:55.466 [2024-11-20 07:21:19.599990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.466 [2024-11-20 07:21:19.600005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:55.466 request: 00:23:55.466 { 00:23:55.466 "name": "raid_bdev1", 00:23:55.466 "raid_level": "raid1", 00:23:55.466 "base_bdevs": [ 00:23:55.466 "malloc1", 00:23:55.466 "malloc2", 00:23:55.466 "malloc3" 00:23:55.466 ], 00:23:55.466 "superblock": false, 00:23:55.466 "method": "bdev_raid_create", 00:23:55.466 "req_id": 1 00:23:55.466 } 00:23:55.466 Got JSON-RPC error response 00:23:55.466 response: 00:23:55.466 { 00:23:55.466 "code": -17, 00:23:55.466 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:55.466 } 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 [2024-11-20 07:21:19.661093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:55.466 [2024-11-20 07:21:19.661197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.466 [2024-11-20 07:21:19.661232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:55.466 [2024-11-20 07:21:19.661247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.466 [2024-11-20 07:21:19.664248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.466 [2024-11-20 07:21:19.664307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:55.466 [2024-11-20 07:21:19.664426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:55.466 [2024-11-20 07:21:19.664510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:55.466 pt1 00:23:55.466 
07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.466 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.467 "name": "raid_bdev1", 00:23:55.467 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:55.467 "strip_size_kb": 0, 00:23:55.467 
"state": "configuring", 00:23:55.467 "raid_level": "raid1", 00:23:55.467 "superblock": true, 00:23:55.467 "num_base_bdevs": 3, 00:23:55.467 "num_base_bdevs_discovered": 1, 00:23:55.467 "num_base_bdevs_operational": 3, 00:23:55.467 "base_bdevs_list": [ 00:23:55.467 { 00:23:55.467 "name": "pt1", 00:23:55.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:55.467 "is_configured": true, 00:23:55.467 "data_offset": 2048, 00:23:55.467 "data_size": 63488 00:23:55.467 }, 00:23:55.467 { 00:23:55.467 "name": null, 00:23:55.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:55.467 "is_configured": false, 00:23:55.467 "data_offset": 2048, 00:23:55.467 "data_size": 63488 00:23:55.467 }, 00:23:55.467 { 00:23:55.467 "name": null, 00:23:55.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:55.467 "is_configured": false, 00:23:55.467 "data_offset": 2048, 00:23:55.467 "data_size": 63488 00:23:55.467 } 00:23:55.467 ] 00:23:55.467 }' 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.467 07:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.034 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:23:56.034 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:56.034 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.034 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.034 [2024-11-20 07:21:20.197324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:56.034 [2024-11-20 07:21:20.197432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.034 [2024-11-20 07:21:20.197467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:56.034 
[2024-11-20 07:21:20.197483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.034 [2024-11-20 07:21:20.198094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.035 [2024-11-20 07:21:20.198137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:56.035 [2024-11-20 07:21:20.198249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:56.035 [2024-11-20 07:21:20.198282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:56.035 pt2 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.035 [2024-11-20 07:21:20.205289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.035 "name": "raid_bdev1", 00:23:56.035 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:56.035 "strip_size_kb": 0, 00:23:56.035 "state": "configuring", 00:23:56.035 "raid_level": "raid1", 00:23:56.035 "superblock": true, 00:23:56.035 "num_base_bdevs": 3, 00:23:56.035 "num_base_bdevs_discovered": 1, 00:23:56.035 "num_base_bdevs_operational": 3, 00:23:56.035 "base_bdevs_list": [ 00:23:56.035 { 00:23:56.035 "name": "pt1", 00:23:56.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.035 "is_configured": true, 00:23:56.035 "data_offset": 2048, 00:23:56.035 "data_size": 63488 00:23:56.035 }, 00:23:56.035 { 00:23:56.035 "name": null, 00:23:56.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.035 "is_configured": false, 00:23:56.035 "data_offset": 0, 00:23:56.035 "data_size": 63488 00:23:56.035 }, 00:23:56.035 { 00:23:56.035 "name": null, 00:23:56.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.035 "is_configured": false, 00:23:56.035 
"data_offset": 2048, 00:23:56.035 "data_size": 63488 00:23:56.035 } 00:23:56.035 ] 00:23:56.035 }' 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.035 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.602 [2024-11-20 07:21:20.709421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:56.602 [2024-11-20 07:21:20.709535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.602 [2024-11-20 07:21:20.709563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:56.602 [2024-11-20 07:21:20.709581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.602 [2024-11-20 07:21:20.710195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.602 [2024-11-20 07:21:20.710239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:56.602 [2024-11-20 07:21:20.710340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:56.602 [2024-11-20 07:21:20.710395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:56.602 pt2 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.602 07:21:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.602 [2024-11-20 07:21:20.721431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:56.602 [2024-11-20 07:21:20.721487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.602 [2024-11-20 07:21:20.721516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:56.602 [2024-11-20 07:21:20.721535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.602 [2024-11-20 07:21:20.721992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.602 [2024-11-20 07:21:20.722042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:56.602 [2024-11-20 07:21:20.722120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:56.602 [2024-11-20 07:21:20.722153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:56.602 [2024-11-20 07:21:20.722305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:56.602 [2024-11-20 07:21:20.722340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:56.602 [2024-11-20 07:21:20.722656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:56.602 [2024-11-20 07:21:20.722887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:23:56.602 [2024-11-20 07:21:20.722904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:56.602 [2024-11-20 07:21:20.723075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.602 pt3 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.602 07:21:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.602 "name": "raid_bdev1", 00:23:56.602 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:56.602 "strip_size_kb": 0, 00:23:56.602 "state": "online", 00:23:56.602 "raid_level": "raid1", 00:23:56.602 "superblock": true, 00:23:56.602 "num_base_bdevs": 3, 00:23:56.602 "num_base_bdevs_discovered": 3, 00:23:56.602 "num_base_bdevs_operational": 3, 00:23:56.602 "base_bdevs_list": [ 00:23:56.602 { 00:23:56.602 "name": "pt1", 00:23:56.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.602 "is_configured": true, 00:23:56.602 "data_offset": 2048, 00:23:56.602 "data_size": 63488 00:23:56.602 }, 00:23:56.602 { 00:23:56.602 "name": "pt2", 00:23:56.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.602 "is_configured": true, 00:23:56.602 "data_offset": 2048, 00:23:56.602 "data_size": 63488 00:23:56.602 }, 00:23:56.602 { 00:23:56.602 "name": "pt3", 00:23:56.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:56.602 "is_configured": true, 00:23:56.602 "data_offset": 2048, 00:23:56.602 "data_size": 63488 00:23:56.602 } 00:23:56.602 ] 00:23:56.602 }' 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.602 07:21:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.169 [2024-11-20 07:21:21.198087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.169 "name": "raid_bdev1", 00:23:57.169 "aliases": [ 00:23:57.169 "60e33761-0384-42c9-8589-29c491553ca2" 00:23:57.169 ], 00:23:57.169 "product_name": "Raid Volume", 00:23:57.169 "block_size": 512, 00:23:57.169 "num_blocks": 63488, 00:23:57.169 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:57.169 "assigned_rate_limits": { 00:23:57.169 "rw_ios_per_sec": 0, 00:23:57.169 "rw_mbytes_per_sec": 0, 00:23:57.169 "r_mbytes_per_sec": 0, 00:23:57.169 "w_mbytes_per_sec": 0 00:23:57.169 }, 00:23:57.169 "claimed": false, 00:23:57.169 "zoned": false, 00:23:57.169 "supported_io_types": { 00:23:57.169 "read": true, 00:23:57.169 "write": true, 00:23:57.169 "unmap": false, 00:23:57.169 "flush": false, 00:23:57.169 "reset": true, 00:23:57.169 "nvme_admin": false, 00:23:57.169 "nvme_io": false, 00:23:57.169 "nvme_io_md": false, 00:23:57.169 "write_zeroes": true, 00:23:57.169 "zcopy": false, 00:23:57.169 "get_zone_info": 
false, 00:23:57.169 "zone_management": false, 00:23:57.169 "zone_append": false, 00:23:57.169 "compare": false, 00:23:57.169 "compare_and_write": false, 00:23:57.169 "abort": false, 00:23:57.169 "seek_hole": false, 00:23:57.169 "seek_data": false, 00:23:57.169 "copy": false, 00:23:57.169 "nvme_iov_md": false 00:23:57.169 }, 00:23:57.169 "memory_domains": [ 00:23:57.169 { 00:23:57.169 "dma_device_id": "system", 00:23:57.169 "dma_device_type": 1 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.169 "dma_device_type": 2 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "dma_device_id": "system", 00:23:57.169 "dma_device_type": 1 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.169 "dma_device_type": 2 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "dma_device_id": "system", 00:23:57.169 "dma_device_type": 1 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.169 "dma_device_type": 2 00:23:57.169 } 00:23:57.169 ], 00:23:57.169 "driver_specific": { 00:23:57.169 "raid": { 00:23:57.169 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:57.169 "strip_size_kb": 0, 00:23:57.169 "state": "online", 00:23:57.169 "raid_level": "raid1", 00:23:57.169 "superblock": true, 00:23:57.169 "num_base_bdevs": 3, 00:23:57.169 "num_base_bdevs_discovered": 3, 00:23:57.169 "num_base_bdevs_operational": 3, 00:23:57.169 "base_bdevs_list": [ 00:23:57.169 { 00:23:57.169 "name": "pt1", 00:23:57.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.169 "is_configured": true, 00:23:57.169 "data_offset": 2048, 00:23:57.169 "data_size": 63488 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "name": "pt2", 00:23:57.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.169 "is_configured": true, 00:23:57.169 "data_offset": 2048, 00:23:57.169 "data_size": 63488 00:23:57.169 }, 00:23:57.169 { 00:23:57.169 "name": "pt3", 00:23:57.169 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:23:57.169 "is_configured": true, 00:23:57.169 "data_offset": 2048, 00:23:57.169 "data_size": 63488 00:23:57.169 } 00:23:57.169 ] 00:23:57.169 } 00:23:57.169 } 00:23:57.169 }' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:57.169 pt2 00:23:57.169 pt3' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:57.169 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.170 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.429 [2024-11-20 07:21:21.510117] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 60e33761-0384-42c9-8589-29c491553ca2 '!=' 60e33761-0384-42c9-8589-29c491553ca2 ']' 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.429 [2024-11-20 07:21:21.561863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.429 07:21:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.429 "name": "raid_bdev1", 00:23:57.429 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:57.429 "strip_size_kb": 0, 00:23:57.429 "state": "online", 00:23:57.429 "raid_level": "raid1", 00:23:57.429 "superblock": true, 00:23:57.429 "num_base_bdevs": 3, 00:23:57.429 "num_base_bdevs_discovered": 2, 00:23:57.429 "num_base_bdevs_operational": 2, 00:23:57.429 "base_bdevs_list": [ 00:23:57.429 { 00:23:57.429 "name": null, 00:23:57.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.429 "is_configured": false, 00:23:57.429 "data_offset": 0, 00:23:57.429 "data_size": 63488 00:23:57.429 }, 00:23:57.429 { 00:23:57.429 "name": "pt2", 00:23:57.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.429 "is_configured": true, 00:23:57.429 "data_offset": 2048, 00:23:57.429 "data_size": 63488 00:23:57.429 }, 00:23:57.429 { 00:23:57.429 "name": "pt3", 00:23:57.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.429 "is_configured": true, 00:23:57.429 "data_offset": 2048, 00:23:57.429 "data_size": 63488 00:23:57.429 } 
00:23:57.429 ] 00:23:57.429 }' 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.429 07:21:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 [2024-11-20 07:21:22.081986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.997 [2024-11-20 07:21:22.082023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:57.997 [2024-11-20 07:21:22.082149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.997 [2024-11-20 07:21:22.082231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.997 [2024-11-20 07:21:22.082255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 [2024-11-20 07:21:22.161956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.997 [2024-11-20 07:21:22.162179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.997 [2024-11-20 07:21:22.162331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:57.997 [2024-11-20 07:21:22.162474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.997 [2024-11-20 07:21:22.165414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.997 [2024-11-20 07:21:22.165470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.997 [2024-11-20 07:21:22.165597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:57.997 [2024-11-20 07:21:22.165665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:57.997 pt2 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.997 07:21:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.997 "name": "raid_bdev1", 00:23:57.997 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:57.997 "strip_size_kb": 0, 00:23:57.997 "state": "configuring", 00:23:57.997 "raid_level": "raid1", 00:23:57.997 "superblock": true, 00:23:57.997 "num_base_bdevs": 3, 00:23:57.997 "num_base_bdevs_discovered": 1, 00:23:57.997 "num_base_bdevs_operational": 2, 00:23:57.997 "base_bdevs_list": [ 00:23:57.997 { 00:23:57.997 "name": null, 00:23:57.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.997 "is_configured": false, 00:23:57.997 "data_offset": 2048, 00:23:57.997 "data_size": 63488 00:23:57.997 }, 00:23:57.997 { 00:23:57.997 "name": "pt2", 00:23:57.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.997 "is_configured": true, 00:23:57.997 "data_offset": 2048, 00:23:57.997 "data_size": 63488 00:23:57.997 }, 00:23:57.997 { 00:23:57.997 "name": null, 00:23:57.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.997 "is_configured": false, 00:23:57.997 "data_offset": 2048, 00:23:57.997 "data_size": 63488 00:23:57.997 } 
00:23:57.997 ] 00:23:57.997 }' 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.997 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.562 [2024-11-20 07:21:22.670133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:58.562 [2024-11-20 07:21:22.670415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.562 [2024-11-20 07:21:22.670492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:58.562 [2024-11-20 07:21:22.670635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.562 [2024-11-20 07:21:22.671269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.562 [2024-11-20 07:21:22.671438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:58.562 [2024-11-20 07:21:22.671573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:58.562 [2024-11-20 07:21:22.671636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:58.562 [2024-11-20 07:21:22.671785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:23:58.562 [2024-11-20 07:21:22.671807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:58.562 [2024-11-20 07:21:22.672133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:58.562 [2024-11-20 07:21:22.672349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:58.562 [2024-11-20 07:21:22.672364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:58.562 [2024-11-20 07:21:22.672549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.562 pt3 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.562 
07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.562 "name": "raid_bdev1", 00:23:58.562 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:58.562 "strip_size_kb": 0, 00:23:58.562 "state": "online", 00:23:58.562 "raid_level": "raid1", 00:23:58.562 "superblock": true, 00:23:58.562 "num_base_bdevs": 3, 00:23:58.562 "num_base_bdevs_discovered": 2, 00:23:58.562 "num_base_bdevs_operational": 2, 00:23:58.562 "base_bdevs_list": [ 00:23:58.562 { 00:23:58.562 "name": null, 00:23:58.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.562 "is_configured": false, 00:23:58.562 "data_offset": 2048, 00:23:58.562 "data_size": 63488 00:23:58.562 }, 00:23:58.562 { 00:23:58.562 "name": "pt2", 00:23:58.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.562 "is_configured": true, 00:23:58.562 "data_offset": 2048, 00:23:58.562 "data_size": 63488 00:23:58.562 }, 00:23:58.562 { 00:23:58.562 "name": "pt3", 00:23:58.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.562 "is_configured": true, 00:23:58.562 "data_offset": 2048, 00:23:58.562 "data_size": 63488 00:23:58.562 } 00:23:58.562 ] 00:23:58.562 }' 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.562 07:21:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 [2024-11-20 07:21:23.186250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.135 [2024-11-20 07:21:23.186419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.135 [2024-11-20 07:21:23.186540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.135 [2024-11-20 07:21:23.186649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.135 [2024-11-20 07:21:23.186668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 [2024-11-20 07:21:23.250270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:59.135 [2024-11-20 07:21:23.250338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.135 [2024-11-20 07:21:23.250371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:59.135 [2024-11-20 07:21:23.250385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.135 [2024-11-20 07:21:23.253355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.135 pt1 00:23:59.135 [2024-11-20 07:21:23.253560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:59.135 [2024-11-20 07:21:23.253706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:59.135 [2024-11-20 07:21:23.253770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:59.135 [2024-11-20 07:21:23.253948] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:59.135 [2024-11-20 07:21:23.253966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.135 [2024-11-20 07:21:23.253989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:59.135 [2024-11-20 07:21:23.254061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.135 07:21:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.135 "name": "raid_bdev1", 00:23:59.135 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:59.135 "strip_size_kb": 0, 00:23:59.135 "state": "configuring", 00:23:59.135 "raid_level": "raid1", 00:23:59.135 "superblock": true, 00:23:59.135 "num_base_bdevs": 3, 00:23:59.135 "num_base_bdevs_discovered": 1, 00:23:59.135 "num_base_bdevs_operational": 2, 00:23:59.135 "base_bdevs_list": [ 00:23:59.135 { 00:23:59.135 "name": null, 00:23:59.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.135 "is_configured": false, 00:23:59.135 "data_offset": 2048, 00:23:59.135 "data_size": 63488 00:23:59.135 }, 00:23:59.135 { 00:23:59.135 "name": "pt2", 00:23:59.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.135 "is_configured": true, 00:23:59.135 "data_offset": 2048, 00:23:59.135 "data_size": 63488 00:23:59.135 }, 00:23:59.135 { 00:23:59.135 "name": null, 00:23:59.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.135 "is_configured": false, 00:23:59.135 "data_offset": 2048, 00:23:59.135 "data_size": 63488 00:23:59.135 } 00:23:59.135 ] 00:23:59.135 }' 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.135 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.704 [2024-11-20 07:21:23.814509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:59.704 [2024-11-20 07:21:23.814636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.704 [2024-11-20 07:21:23.814673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:59.704 [2024-11-20 07:21:23.814689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.704 [2024-11-20 07:21:23.815324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.704 [2024-11-20 07:21:23.815374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:59.704 [2024-11-20 07:21:23.815483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:59.704 [2024-11-20 07:21:23.815552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:59.704 [2024-11-20 07:21:23.815731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:59.704 [2024-11-20 07:21:23.815748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:59.704 [2024-11-20 07:21:23.816069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:59.704 [2024-11-20 07:21:23.816281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000008900 00:23:59.704 [2024-11-20 07:21:23.816302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:59.704 [2024-11-20 07:21:23.816469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.704 pt3 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.704 07:21:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.704 "name": "raid_bdev1", 00:23:59.704 "uuid": "60e33761-0384-42c9-8589-29c491553ca2", 00:23:59.704 "strip_size_kb": 0, 00:23:59.704 "state": "online", 00:23:59.704 "raid_level": "raid1", 00:23:59.704 "superblock": true, 00:23:59.704 "num_base_bdevs": 3, 00:23:59.704 "num_base_bdevs_discovered": 2, 00:23:59.704 "num_base_bdevs_operational": 2, 00:23:59.704 "base_bdevs_list": [ 00:23:59.704 { 00:23:59.704 "name": null, 00:23:59.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.704 "is_configured": false, 00:23:59.704 "data_offset": 2048, 00:23:59.704 "data_size": 63488 00:23:59.704 }, 00:23:59.704 { 00:23:59.704 "name": "pt2", 00:23:59.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.704 "is_configured": true, 00:23:59.704 "data_offset": 2048, 00:23:59.704 "data_size": 63488 00:23:59.704 }, 00:23:59.704 { 00:23:59.704 "name": "pt3", 00:23:59.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.704 "is_configured": true, 00:23:59.704 "data_offset": 2048, 00:23:59.704 "data_size": 63488 00:23:59.704 } 00:23:59.704 ] 00:23:59.704 }' 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.704 07:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:00.272 [2024-11-20 07:21:24.423030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 60e33761-0384-42c9-8589-29c491553ca2 '!=' 60e33761-0384-42c9-8589-29c491553ca2 ']' 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68923 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68923 ']' 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68923 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68923 00:24:00.272 killing process with pid 68923 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68923' 
00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68923 00:24:00.272 [2024-11-20 07:21:24.507092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.272 07:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68923 00:24:00.272 [2024-11-20 07:21:24.507243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.272 [2024-11-20 07:21:24.507323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.272 [2024-11-20 07:21:24.507342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:00.531 [2024-11-20 07:21:24.783726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:01.915 07:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:01.915 00:24:01.915 real 0m8.579s 00:24:01.915 user 0m14.031s 00:24:01.915 sys 0m1.195s 00:24:01.915 07:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.915 ************************************ 00:24:01.915 END TEST raid_superblock_test 00:24:01.915 ************************************ 00:24:01.915 07:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.915 07:21:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:24:01.915 07:21:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:01.915 07:21:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.915 07:21:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.915 ************************************ 00:24:01.915 START TEST raid_read_error_test 00:24:01.915 ************************************ 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test 
raid1 3 read 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:01.915 
07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c2ZnL9EbuK 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69375 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69375 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69375 ']' 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.915 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.916 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.916 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.916 07:21:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.916 [2024-11-20 07:21:26.010226] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:01.916 [2024-11-20 07:21:26.010603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69375 ] 00:24:01.916 [2024-11-20 07:21:26.185270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.174 [2024-11-20 07:21:26.319120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.433 [2024-11-20 07:21:26.530018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.433 [2024-11-20 07:21:26.530095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 BaseBdev1_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 true 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 [2024-11-20 07:21:27.092066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:03.001 [2024-11-20 07:21:27.092279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.001 [2024-11-20 07:21:27.092321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:03.001 [2024-11-20 07:21:27.092340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.001 [2024-11-20 07:21:27.095205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.001 [2024-11-20 07:21:27.095276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.001 BaseBdev1 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 BaseBdev2_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 true 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 [2024-11-20 07:21:27.150824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:03.001 [2024-11-20 07:21:27.151030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.001 [2024-11-20 07:21:27.151104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:03.001 [2024-11-20 07:21:27.151262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.001 [2024-11-20 07:21:27.154250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.001 BaseBdev2 00:24:03.001 [2024-11-20 07:21:27.154448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 BaseBdev3_malloc 00:24:03.001 07:21:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 true 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 [2024-11-20 07:21:27.224853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:03.001 [2024-11-20 07:21:27.225052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.001 [2024-11-20 07:21:27.225092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:03.001 [2024-11-20 07:21:27.225115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.001 [2024-11-20 07:21:27.227994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.001 [2024-11-20 07:21:27.228046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:03.001 BaseBdev3 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.002 [2024-11-20 07:21:27.233053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.002 [2024-11-20 07:21:27.235699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.002 [2024-11-20 07:21:27.235931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.002 [2024-11-20 07:21:27.236254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:03.002 [2024-11-20 07:21:27.236390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:03.002 [2024-11-20 07:21:27.236791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:24:03.002 [2024-11-20 07:21:27.237033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:03.002 [2024-11-20 07:21:27.237055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:03.002 [2024-11-20 07:21:27.237298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.002 07:21:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.002 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.260 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.260 "name": "raid_bdev1", 00:24:03.260 "uuid": "de65e202-699d-4e00-9d25-a93a1a2a013b", 00:24:03.260 "strip_size_kb": 0, 00:24:03.260 "state": "online", 00:24:03.260 "raid_level": "raid1", 00:24:03.260 "superblock": true, 00:24:03.260 "num_base_bdevs": 3, 00:24:03.260 "num_base_bdevs_discovered": 3, 00:24:03.260 "num_base_bdevs_operational": 3, 00:24:03.260 "base_bdevs_list": [ 00:24:03.260 { 00:24:03.260 "name": "BaseBdev1", 00:24:03.260 "uuid": "0e9772b2-db7b-5dca-b2ce-7b961fa7541b", 00:24:03.260 "is_configured": true, 00:24:03.260 "data_offset": 2048, 00:24:03.260 "data_size": 63488 00:24:03.260 }, 00:24:03.260 { 00:24:03.260 "name": "BaseBdev2", 00:24:03.260 "uuid": "6e5ff34d-8d11-5bb0-8456-2e9969a441cd", 00:24:03.260 "is_configured": true, 00:24:03.260 "data_offset": 2048, 00:24:03.260 "data_size": 63488 
00:24:03.260 }, 00:24:03.260 { 00:24:03.260 "name": "BaseBdev3", 00:24:03.260 "uuid": "fce1bc3b-e628-55e4-a726-cf37b438aa9b", 00:24:03.260 "is_configured": true, 00:24:03.260 "data_offset": 2048, 00:24:03.260 "data_size": 63488 00:24:03.260 } 00:24:03.260 ] 00:24:03.260 }' 00:24:03.260 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.260 07:21:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.518 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:03.518 07:21:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:03.777 [2024-11-20 07:21:27.875010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.713 
07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.713 "name": "raid_bdev1", 00:24:04.713 "uuid": "de65e202-699d-4e00-9d25-a93a1a2a013b", 00:24:04.713 "strip_size_kb": 0, 00:24:04.713 "state": "online", 00:24:04.713 "raid_level": "raid1", 00:24:04.713 "superblock": true, 00:24:04.713 "num_base_bdevs": 3, 00:24:04.713 "num_base_bdevs_discovered": 3, 00:24:04.713 "num_base_bdevs_operational": 3, 00:24:04.713 "base_bdevs_list": [ 00:24:04.713 { 00:24:04.713 "name": "BaseBdev1", 00:24:04.713 "uuid": "0e9772b2-db7b-5dca-b2ce-7b961fa7541b", 
00:24:04.713 "is_configured": true, 00:24:04.713 "data_offset": 2048, 00:24:04.713 "data_size": 63488 00:24:04.713 }, 00:24:04.713 { 00:24:04.713 "name": "BaseBdev2", 00:24:04.713 "uuid": "6e5ff34d-8d11-5bb0-8456-2e9969a441cd", 00:24:04.713 "is_configured": true, 00:24:04.713 "data_offset": 2048, 00:24:04.713 "data_size": 63488 00:24:04.713 }, 00:24:04.713 { 00:24:04.713 "name": "BaseBdev3", 00:24:04.713 "uuid": "fce1bc3b-e628-55e4-a726-cf37b438aa9b", 00:24:04.713 "is_configured": true, 00:24:04.713 "data_offset": 2048, 00:24:04.713 "data_size": 63488 00:24:04.713 } 00:24:04.713 ] 00:24:04.713 }' 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.713 07:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.972 07:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:04.973 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.973 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.973 [2024-11-20 07:21:29.258004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.973 [2024-11-20 07:21:29.258039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.231 [2024-11-20 07:21:29.261594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.231 [2024-11-20 07:21:29.261693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.231 [2024-11-20 07:21:29.261837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.231 [2024-11-20 07:21:29.261854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:05.231 { 00:24:05.231 "results": [ 00:24:05.231 { 00:24:05.231 "job": "raid_bdev1", 
00:24:05.231 "core_mask": "0x1", 00:24:05.231 "workload": "randrw", 00:24:05.231 "percentage": 50, 00:24:05.231 "status": "finished", 00:24:05.231 "queue_depth": 1, 00:24:05.231 "io_size": 131072, 00:24:05.231 "runtime": 1.380452, 00:24:05.231 "iops": 8734.820189329292, 00:24:05.231 "mibps": 1091.8525236661615, 00:24:05.231 "io_failed": 0, 00:24:05.231 "io_timeout": 0, 00:24:05.231 "avg_latency_us": 110.11008971787874, 00:24:05.231 "min_latency_us": 39.56363636363636, 00:24:05.231 "max_latency_us": 1995.8690909090908 00:24:05.231 } 00:24:05.231 ], 00:24:05.231 "core_count": 1 00:24:05.231 } 00:24:05.231 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.231 07:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69375 00:24:05.231 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69375 ']' 00:24:05.231 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69375 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69375 00:24:05.232 killing process with pid 69375 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69375' 00:24:05.232 07:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69375 00:24:05.232 [2024-11-20 07:21:29.296543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:05.232 07:21:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69375 00:24:05.232 [2024-11-20 07:21:29.508829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c2ZnL9EbuK 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:06.649 00:24:06.649 real 0m4.719s 00:24:06.649 user 0m5.868s 00:24:06.649 sys 0m0.572s 00:24:06.649 ************************************ 00:24:06.649 END TEST raid_read_error_test 00:24:06.649 ************************************ 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.649 07:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.649 07:21:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:24:06.649 07:21:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:06.649 07:21:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.649 07:21:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:06.649 ************************************ 00:24:06.649 START TEST raid_write_error_test 00:24:06.649 ************************************ 00:24:06.649 07:21:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xaj5Sh6U21 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69527 00:24:06.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69527 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69527 ']' 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.649 07:21:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.649 [2024-11-20 07:21:30.790853] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:06.649 [2024-11-20 07:21:30.791046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69527 ] 00:24:06.908 [2024-11-20 07:21:30.969181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.908 [2024-11-20 07:21:31.101138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.167 [2024-11-20 07:21:31.309231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:07.167 [2024-11-20 07:21:31.309289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.735 BaseBdev1_malloc 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.735 true 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.735 [2024-11-20 07:21:31.783886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:07.735 [2024-11-20 07:21:31.783984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.735 [2024-11-20 07:21:31.784031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:07.735 [2024-11-20 07:21:31.784060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.735 [2024-11-20 07:21:31.787994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.735 [2024-11-20 07:21:31.788064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:07.735 BaseBdev1 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.735 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.736 BaseBdev2_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 true 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 [2024-11-20 07:21:31.852164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:07.736 [2024-11-20 07:21:31.852269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.736 [2024-11-20 07:21:31.852324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:07.736 [2024-11-20 07:21:31.852361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.736 [2024-11-20 07:21:31.855634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.736 [2024-11-20 07:21:31.855688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:07.736 BaseBdev2 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:07.736 07:21:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 BaseBdev3_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 true 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 [2024-11-20 07:21:31.934413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:07.736 [2024-11-20 07:21:31.934685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.736 [2024-11-20 07:21:31.934764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:07.736 [2024-11-20 07:21:31.934903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.736 [2024-11-20 07:21:31.937910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.736 [2024-11-20 07:21:31.938079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:24:07.736 BaseBdev3 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 [2024-11-20 07:21:31.946519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:07.736 [2024-11-20 07:21:31.949052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.736 [2024-11-20 07:21:31.949295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:07.736 [2024-11-20 07:21:31.949628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:07.736 [2024-11-20 07:21:31.949649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:07.736 [2024-11-20 07:21:31.950010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:24:07.736 [2024-11-20 07:21:31.950253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:07.736 [2024-11-20 07:21:31.950274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:07.736 [2024-11-20 07:21:31.950547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.736 07:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.736 07:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.736 "name": "raid_bdev1", 00:24:07.736 "uuid": "b047100b-e3c4-47dc-9469-26cc28b402ef", 00:24:07.736 "strip_size_kb": 0, 00:24:07.736 "state": "online", 00:24:07.736 "raid_level": "raid1", 00:24:07.736 "superblock": true, 00:24:07.736 "num_base_bdevs": 3, 00:24:07.736 "num_base_bdevs_discovered": 3, 00:24:07.736 "num_base_bdevs_operational": 3, 00:24:07.736 "base_bdevs_list": [ 00:24:07.736 { 00:24:07.736 "name": "BaseBdev1", 00:24:07.736 
"uuid": "ed2411c0-ea63-585f-a04d-0bafa0793545", 00:24:07.736 "is_configured": true, 00:24:07.736 "data_offset": 2048, 00:24:07.736 "data_size": 63488 00:24:07.736 }, 00:24:07.736 { 00:24:07.736 "name": "BaseBdev2", 00:24:07.736 "uuid": "c2f6196b-eed7-5cc3-8621-4bb43184947c", 00:24:07.736 "is_configured": true, 00:24:07.736 "data_offset": 2048, 00:24:07.736 "data_size": 63488 00:24:07.736 }, 00:24:07.736 { 00:24:07.736 "name": "BaseBdev3", 00:24:07.736 "uuid": "bb4b057e-3183-5a0a-a478-e0dc36f0d85e", 00:24:07.736 "is_configured": true, 00:24:07.736 "data_offset": 2048, 00:24:07.736 "data_size": 63488 00:24:07.736 } 00:24:07.736 ] 00:24:07.736 }' 00:24:07.736 07:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.736 07:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.304 07:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:08.304 07:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:08.562 [2024-11-20 07:21:32.604074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.499 [2024-11-20 07:21:33.481753] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:24:09.499 [2024-11-20 07:21:33.481815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:09.499 [2024-11-20 07:21:33.482073] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.499 
07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.499 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.500 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.500 "name": "raid_bdev1", 00:24:09.500 "uuid": "b047100b-e3c4-47dc-9469-26cc28b402ef", 00:24:09.500 "strip_size_kb": 0, 00:24:09.500 "state": "online", 00:24:09.500 "raid_level": "raid1", 00:24:09.500 "superblock": true, 00:24:09.500 "num_base_bdevs": 3, 00:24:09.500 "num_base_bdevs_discovered": 2, 00:24:09.500 "num_base_bdevs_operational": 2, 00:24:09.500 "base_bdevs_list": [ 00:24:09.500 { 00:24:09.500 "name": null, 00:24:09.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.500 "is_configured": false, 00:24:09.500 "data_offset": 0, 00:24:09.500 "data_size": 63488 00:24:09.500 }, 00:24:09.500 { 00:24:09.500 "name": "BaseBdev2", 00:24:09.500 "uuid": "c2f6196b-eed7-5cc3-8621-4bb43184947c", 00:24:09.500 "is_configured": true, 00:24:09.500 "data_offset": 2048, 00:24:09.500 "data_size": 63488 00:24:09.500 }, 00:24:09.500 { 00:24:09.500 "name": "BaseBdev3", 00:24:09.500 "uuid": "bb4b057e-3183-5a0a-a478-e0dc36f0d85e", 00:24:09.500 "is_configured": true, 00:24:09.500 "data_offset": 2048, 00:24:09.500 "data_size": 63488 00:24:09.500 } 00:24:09.500 ] 00:24:09.500 }' 00:24:09.500 07:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.500 07:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.764 [2024-11-20 07:21:34.008187] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:09.764 [2024-11-20 07:21:34.008398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:09.764 [2024-11-20 07:21:34.012677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.764 [2024-11-20 07:21:34.012973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.764 { 00:24:09.764 "results": [ 00:24:09.764 { 00:24:09.764 "job": "raid_bdev1", 00:24:09.764 "core_mask": "0x1", 00:24:09.764 "workload": "randrw", 00:24:09.764 "percentage": 50, 00:24:09.764 "status": "finished", 00:24:09.764 "queue_depth": 1, 00:24:09.764 "io_size": 131072, 00:24:09.764 "runtime": 1.401718, 00:24:09.764 "iops": 9744.4707137955, 00:24:09.764 "mibps": 1218.0588392244374, 00:24:09.764 "io_failed": 0, 00:24:09.764 "io_timeout": 0, 00:24:09.764 "avg_latency_us": 98.3270767858688, 00:24:09.764 "min_latency_us": 43.985454545454544, 00:24:09.764 "max_latency_us": 1824.581818181818 00:24:09.764 } 00:24:09.764 ], 00:24:09.764 "core_count": 1 00:24:09.764 } 00:24:09.764 [2024-11-20 07:21:34.013252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:09.764 [2024-11-20 07:21:34.013298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69527 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69527 ']' 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69527 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:24:09.764 07:21:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69527 00:24:09.764 killing process with pid 69527 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69527' 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69527 00:24:09.764 [2024-11-20 07:21:34.052474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.764 07:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69527 00:24:10.023 [2024-11-20 07:21:34.302246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xaj5Sh6U21 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:11.400 00:24:11.400 real 0m4.729s 00:24:11.400 user 0m5.818s 00:24:11.400 sys 0m0.571s 00:24:11.400 07:21:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.400 ************************************ 00:24:11.400 END TEST raid_write_error_test 00:24:11.400 ************************************ 00:24:11.400 07:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.400 07:21:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:24:11.400 07:21:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:11.400 07:21:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:24:11.400 07:21:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:11.400 07:21:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.400 07:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:11.400 ************************************ 00:24:11.400 START TEST raid_state_function_test 00:24:11.400 ************************************ 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:11.400 
07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:11.400 Process raid pid: 69671 00:24:11.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69671 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69671' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69671 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69671 ']' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.400 07:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.401 [2024-11-20 07:21:35.566135] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:11.401 [2024-11-20 07:21:35.566597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.659 [2024-11-20 07:21:35.749285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.660 [2024-11-20 07:21:35.882544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.918 [2024-11-20 07:21:36.089897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.918 [2024-11-20 07:21:36.090168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.486 [2024-11-20 07:21:36.582823] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:12.486 [2024-11-20 07:21:36.583035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:12.486 [2024-11-20 07:21:36.583065] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:12.486 [2024-11-20 07:21:36.583084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:12.486 [2024-11-20 07:21:36.583094] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:24:12.486 [2024-11-20 07:21:36.583108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:12.486 [2024-11-20 07:21:36.583117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:12.486 [2024-11-20 07:21:36.583131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.486 "name": "Existed_Raid", 00:24:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.486 "strip_size_kb": 64, 00:24:12.486 "state": "configuring", 00:24:12.486 "raid_level": "raid0", 00:24:12.486 "superblock": false, 00:24:12.486 "num_base_bdevs": 4, 00:24:12.486 "num_base_bdevs_discovered": 0, 00:24:12.486 "num_base_bdevs_operational": 4, 00:24:12.486 "base_bdevs_list": [ 00:24:12.486 { 00:24:12.486 "name": "BaseBdev1", 00:24:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.486 "is_configured": false, 00:24:12.486 "data_offset": 0, 00:24:12.486 "data_size": 0 00:24:12.486 }, 00:24:12.486 { 00:24:12.486 "name": "BaseBdev2", 00:24:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.486 "is_configured": false, 00:24:12.486 "data_offset": 0, 00:24:12.486 "data_size": 0 00:24:12.486 }, 00:24:12.486 { 00:24:12.486 "name": "BaseBdev3", 00:24:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.486 "is_configured": false, 00:24:12.486 "data_offset": 0, 00:24:12.486 "data_size": 0 00:24:12.486 }, 00:24:12.486 { 00:24:12.486 "name": "BaseBdev4", 00:24:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.486 "is_configured": false, 00:24:12.486 "data_offset": 0, 00:24:12.486 "data_size": 0 00:24:12.486 } 00:24:12.486 ] 00:24:12.486 }' 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.486 07:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
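`verify_raid_bdev_state` (bdev_raid.sh@103-115 in the trace) selects the target bdev out of `rpc_cmd bdev_raid_get_bdevs all` with a jq filter and compares fields such as `state` and `num_base_bdevs_discovered` against expected values. A minimal stand-in for that check, using a trimmed copy of the `Existed_Raid` JSON dumped above and a hypothetical `get_field` helper (plain `sed` instead of the script's jq filter):

```shell
# Trimmed copy of the Existed_Raid info printed in the trace; the real
# script obtains this via: rpc_cmd bdev_raid_get_bdevs all | jq -r '...'
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}'
# Extract a scalar field by key (sed stand-in for the jq lookup; handles
# both quoted strings and bare numbers, with or without trailing comma)
get_field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p"
}
state=$(get_field state)
discovered=$(get_field num_base_bdevs_discovered)
[ "$state" = configuring ] && echo "state OK: $state"
```

Before any base bdevs exist, the raid sits in `configuring` with zero discovered devices, which is exactly what the trace's first `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` call asserts.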
00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 [2024-11-20 07:21:37.066898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.054 [2024-11-20 07:21:37.066947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 [2024-11-20 07:21:37.074871] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:13.054 [2024-11-20 07:21:37.074927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:13.054 [2024-11-20 07:21:37.074943] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.054 [2024-11-20 07:21:37.074959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.054 [2024-11-20 07:21:37.074969] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:13.054 [2024-11-20 07:21:37.074982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:13.054 [2024-11-20 07:21:37.074992] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:13.054 [2024-11-20 07:21:37.075005] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 [2024-11-20 07:21:37.119962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.054 BaseBdev1 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.054 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.054 [ 00:24:13.054 { 00:24:13.054 "name": "BaseBdev1", 00:24:13.054 "aliases": [ 00:24:13.054 "708421fe-63a3-431f-aabe-83fe68a01197" 00:24:13.054 ], 00:24:13.054 "product_name": "Malloc disk", 00:24:13.054 "block_size": 512, 00:24:13.054 "num_blocks": 65536, 00:24:13.054 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:13.054 "assigned_rate_limits": { 00:24:13.054 "rw_ios_per_sec": 0, 00:24:13.054 "rw_mbytes_per_sec": 0, 00:24:13.054 "r_mbytes_per_sec": 0, 00:24:13.054 "w_mbytes_per_sec": 0 00:24:13.054 }, 00:24:13.054 "claimed": true, 00:24:13.054 "claim_type": "exclusive_write", 00:24:13.054 "zoned": false, 00:24:13.054 "supported_io_types": { 00:24:13.054 "read": true, 00:24:13.054 "write": true, 00:24:13.054 "unmap": true, 00:24:13.054 "flush": true, 00:24:13.054 "reset": true, 00:24:13.054 "nvme_admin": false, 00:24:13.054 "nvme_io": false, 00:24:13.054 "nvme_io_md": false, 00:24:13.054 "write_zeroes": true, 00:24:13.054 "zcopy": true, 00:24:13.054 "get_zone_info": false, 00:24:13.054 "zone_management": false, 00:24:13.054 "zone_append": false, 00:24:13.054 "compare": false, 00:24:13.054 "compare_and_write": false, 00:24:13.054 "abort": true, 00:24:13.054 "seek_hole": false, 00:24:13.054 "seek_data": false, 00:24:13.054 "copy": true, 00:24:13.054 "nvme_iov_md": false 00:24:13.054 }, 00:24:13.054 "memory_domains": [ 00:24:13.055 { 00:24:13.055 "dma_device_id": "system", 00:24:13.055 "dma_device_type": 1 00:24:13.055 }, 00:24:13.055 { 00:24:13.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.055 "dma_device_type": 2 00:24:13.055 } 00:24:13.055 ], 00:24:13.055 "driver_specific": {} 00:24:13.055 } 00:24:13.055 ] 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.055 "name": "Existed_Raid", 
00:24:13.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.055 "strip_size_kb": 64, 00:24:13.055 "state": "configuring", 00:24:13.055 "raid_level": "raid0", 00:24:13.055 "superblock": false, 00:24:13.055 "num_base_bdevs": 4, 00:24:13.055 "num_base_bdevs_discovered": 1, 00:24:13.055 "num_base_bdevs_operational": 4, 00:24:13.055 "base_bdevs_list": [ 00:24:13.055 { 00:24:13.055 "name": "BaseBdev1", 00:24:13.055 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:13.055 "is_configured": true, 00:24:13.055 "data_offset": 0, 00:24:13.055 "data_size": 65536 00:24:13.055 }, 00:24:13.055 { 00:24:13.055 "name": "BaseBdev2", 00:24:13.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.055 "is_configured": false, 00:24:13.055 "data_offset": 0, 00:24:13.055 "data_size": 0 00:24:13.055 }, 00:24:13.055 { 00:24:13.055 "name": "BaseBdev3", 00:24:13.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.055 "is_configured": false, 00:24:13.055 "data_offset": 0, 00:24:13.055 "data_size": 0 00:24:13.055 }, 00:24:13.055 { 00:24:13.055 "name": "BaseBdev4", 00:24:13.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.055 "is_configured": false, 00:24:13.055 "data_offset": 0, 00:24:13.055 "data_size": 0 00:24:13.055 } 00:24:13.055 ] 00:24:13.055 }' 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.055 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.625 [2024-11-20 07:21:37.640160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.625 [2024-11-20 07:21:37.640225] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.625 [2024-11-20 07:21:37.648210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.625 [2024-11-20 07:21:37.650734] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.625 [2024-11-20 07:21:37.650919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.625 [2024-11-20 07:21:37.651048] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:13.625 [2024-11-20 07:21:37.651180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:13.625 [2024-11-20 07:21:37.651290] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:13.625 [2024-11-20 07:21:37.651322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.625 "name": "Existed_Raid", 00:24:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.625 "strip_size_kb": 64, 00:24:13.625 "state": "configuring", 00:24:13.625 "raid_level": "raid0", 00:24:13.625 "superblock": false, 00:24:13.625 "num_base_bdevs": 4, 00:24:13.625 
"num_base_bdevs_discovered": 1, 00:24:13.625 "num_base_bdevs_operational": 4, 00:24:13.625 "base_bdevs_list": [ 00:24:13.625 { 00:24:13.625 "name": "BaseBdev1", 00:24:13.625 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:13.625 "is_configured": true, 00:24:13.625 "data_offset": 0, 00:24:13.625 "data_size": 65536 00:24:13.625 }, 00:24:13.625 { 00:24:13.625 "name": "BaseBdev2", 00:24:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.625 "is_configured": false, 00:24:13.625 "data_offset": 0, 00:24:13.625 "data_size": 0 00:24:13.625 }, 00:24:13.625 { 00:24:13.625 "name": "BaseBdev3", 00:24:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.625 "is_configured": false, 00:24:13.625 "data_offset": 0, 00:24:13.625 "data_size": 0 00:24:13.625 }, 00:24:13.625 { 00:24:13.625 "name": "BaseBdev4", 00:24:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.625 "is_configured": false, 00:24:13.625 "data_offset": 0, 00:24:13.625 "data_size": 0 00:24:13.625 } 00:24:13.625 ] 00:24:13.625 }' 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.625 07:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 [2024-11-20 07:21:38.224953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.216 BaseBdev2 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:14.216 07:21:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 [ 00:24:14.216 { 00:24:14.216 "name": "BaseBdev2", 00:24:14.216 "aliases": [ 00:24:14.216 "c40f6309-987c-412b-b5e8-3d984851d9ea" 00:24:14.216 ], 00:24:14.216 "product_name": "Malloc disk", 00:24:14.216 "block_size": 512, 00:24:14.216 "num_blocks": 65536, 00:24:14.216 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:14.216 "assigned_rate_limits": { 00:24:14.216 "rw_ios_per_sec": 0, 00:24:14.216 "rw_mbytes_per_sec": 0, 00:24:14.216 "r_mbytes_per_sec": 0, 00:24:14.216 "w_mbytes_per_sec": 0 00:24:14.216 }, 00:24:14.216 "claimed": true, 00:24:14.216 "claim_type": "exclusive_write", 00:24:14.216 "zoned": false, 00:24:14.216 "supported_io_types": { 
00:24:14.216 "read": true, 00:24:14.216 "write": true, 00:24:14.216 "unmap": true, 00:24:14.216 "flush": true, 00:24:14.216 "reset": true, 00:24:14.216 "nvme_admin": false, 00:24:14.216 "nvme_io": false, 00:24:14.216 "nvme_io_md": false, 00:24:14.216 "write_zeroes": true, 00:24:14.216 "zcopy": true, 00:24:14.216 "get_zone_info": false, 00:24:14.216 "zone_management": false, 00:24:14.216 "zone_append": false, 00:24:14.216 "compare": false, 00:24:14.216 "compare_and_write": false, 00:24:14.216 "abort": true, 00:24:14.216 "seek_hole": false, 00:24:14.216 "seek_data": false, 00:24:14.216 "copy": true, 00:24:14.216 "nvme_iov_md": false 00:24:14.216 }, 00:24:14.216 "memory_domains": [ 00:24:14.216 { 00:24:14.216 "dma_device_id": "system", 00:24:14.216 "dma_device_type": 1 00:24:14.216 }, 00:24:14.216 { 00:24:14.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.216 "dma_device_type": 2 00:24:14.216 } 00:24:14.216 ], 00:24:14.216 "driver_specific": {} 00:24:14.216 } 00:24:14.216 ] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.216 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.216 "name": "Existed_Raid", 00:24:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.216 "strip_size_kb": 64, 00:24:14.216 "state": "configuring", 00:24:14.216 "raid_level": "raid0", 00:24:14.216 "superblock": false, 00:24:14.216 "num_base_bdevs": 4, 00:24:14.216 "num_base_bdevs_discovered": 2, 00:24:14.216 "num_base_bdevs_operational": 4, 00:24:14.216 "base_bdevs_list": [ 00:24:14.216 { 00:24:14.217 "name": "BaseBdev1", 00:24:14.217 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:14.217 "is_configured": true, 00:24:14.217 "data_offset": 0, 00:24:14.217 "data_size": 65536 00:24:14.217 }, 00:24:14.217 { 00:24:14.217 "name": "BaseBdev2", 00:24:14.217 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:14.217 
"is_configured": true, 00:24:14.217 "data_offset": 0, 00:24:14.217 "data_size": 65536 00:24:14.217 }, 00:24:14.217 { 00:24:14.217 "name": "BaseBdev3", 00:24:14.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.217 "is_configured": false, 00:24:14.217 "data_offset": 0, 00:24:14.217 "data_size": 0 00:24:14.217 }, 00:24:14.217 { 00:24:14.217 "name": "BaseBdev4", 00:24:14.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.217 "is_configured": false, 00:24:14.217 "data_offset": 0, 00:24:14.217 "data_size": 0 00:24:14.217 } 00:24:14.217 ] 00:24:14.217 }' 00:24:14.217 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.217 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.784 [2024-11-20 07:21:38.820891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:14.784 BaseBdev3 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.784 [ 00:24:14.784 { 00:24:14.784 "name": "BaseBdev3", 00:24:14.784 "aliases": [ 00:24:14.784 "ab9de08a-599c-49a4-8d56-773d0453450d" 00:24:14.784 ], 00:24:14.784 "product_name": "Malloc disk", 00:24:14.784 "block_size": 512, 00:24:14.784 "num_blocks": 65536, 00:24:14.784 "uuid": "ab9de08a-599c-49a4-8d56-773d0453450d", 00:24:14.784 "assigned_rate_limits": { 00:24:14.784 "rw_ios_per_sec": 0, 00:24:14.784 "rw_mbytes_per_sec": 0, 00:24:14.784 "r_mbytes_per_sec": 0, 00:24:14.784 "w_mbytes_per_sec": 0 00:24:14.784 }, 00:24:14.784 "claimed": true, 00:24:14.784 "claim_type": "exclusive_write", 00:24:14.784 "zoned": false, 00:24:14.784 "supported_io_types": { 00:24:14.784 "read": true, 00:24:14.784 "write": true, 00:24:14.784 "unmap": true, 00:24:14.784 "flush": true, 00:24:14.784 "reset": true, 00:24:14.784 "nvme_admin": false, 00:24:14.784 "nvme_io": false, 00:24:14.784 "nvme_io_md": false, 00:24:14.784 "write_zeroes": true, 00:24:14.784 "zcopy": true, 00:24:14.784 "get_zone_info": false, 00:24:14.784 "zone_management": false, 00:24:14.784 "zone_append": false, 00:24:14.784 "compare": false, 00:24:14.784 "compare_and_write": false, 
00:24:14.784 "abort": true, 00:24:14.784 "seek_hole": false, 00:24:14.784 "seek_data": false, 00:24:14.784 "copy": true, 00:24:14.784 "nvme_iov_md": false 00:24:14.784 }, 00:24:14.784 "memory_domains": [ 00:24:14.784 { 00:24:14.784 "dma_device_id": "system", 00:24:14.784 "dma_device_type": 1 00:24:14.784 }, 00:24:14.784 { 00:24:14.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.784 "dma_device_type": 2 00:24:14.784 } 00:24:14.784 ], 00:24:14.784 "driver_specific": {} 00:24:14.784 } 00:24:14.784 ] 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.784 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.785 "name": "Existed_Raid", 00:24:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.785 "strip_size_kb": 64, 00:24:14.785 "state": "configuring", 00:24:14.785 "raid_level": "raid0", 00:24:14.785 "superblock": false, 00:24:14.785 "num_base_bdevs": 4, 00:24:14.785 "num_base_bdevs_discovered": 3, 00:24:14.785 "num_base_bdevs_operational": 4, 00:24:14.785 "base_bdevs_list": [ 00:24:14.785 { 00:24:14.785 "name": "BaseBdev1", 00:24:14.785 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:14.785 "is_configured": true, 00:24:14.785 "data_offset": 0, 00:24:14.785 "data_size": 65536 00:24:14.785 }, 00:24:14.785 { 00:24:14.785 "name": "BaseBdev2", 00:24:14.785 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:14.785 "is_configured": true, 00:24:14.785 "data_offset": 0, 00:24:14.785 "data_size": 65536 00:24:14.785 }, 00:24:14.785 { 00:24:14.785 "name": "BaseBdev3", 00:24:14.785 "uuid": "ab9de08a-599c-49a4-8d56-773d0453450d", 00:24:14.785 "is_configured": true, 00:24:14.785 "data_offset": 0, 00:24:14.785 "data_size": 65536 00:24:14.785 }, 00:24:14.785 { 00:24:14.785 "name": "BaseBdev4", 00:24:14.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.785 "is_configured": false, 
00:24:14.785 "data_offset": 0, 00:24:14.785 "data_size": 0 00:24:14.785 } 00:24:14.785 ] 00:24:14.785 }' 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.785 07:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.351 [2024-11-20 07:21:39.395870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:15.351 [2024-11-20 07:21:39.395939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:15.351 [2024-11-20 07:21:39.395956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:15.351 [2024-11-20 07:21:39.396287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:15.351 [2024-11-20 07:21:39.396510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:15.351 [2024-11-20 07:21:39.396534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:15.351 [2024-11-20 07:21:39.396878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.351 BaseBdev4 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.351 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.351 [ 00:24:15.351 { 00:24:15.351 "name": "BaseBdev4", 00:24:15.351 "aliases": [ 00:24:15.351 "26e2a596-4607-4eab-874f-194e7c9af7f2" 00:24:15.351 ], 00:24:15.351 "product_name": "Malloc disk", 00:24:15.351 "block_size": 512, 00:24:15.351 "num_blocks": 65536, 00:24:15.351 "uuid": "26e2a596-4607-4eab-874f-194e7c9af7f2", 00:24:15.351 "assigned_rate_limits": { 00:24:15.352 "rw_ios_per_sec": 0, 00:24:15.352 "rw_mbytes_per_sec": 0, 00:24:15.352 "r_mbytes_per_sec": 0, 00:24:15.352 "w_mbytes_per_sec": 0 00:24:15.352 }, 00:24:15.352 "claimed": true, 00:24:15.352 "claim_type": "exclusive_write", 00:24:15.352 "zoned": false, 00:24:15.352 "supported_io_types": { 00:24:15.352 "read": true, 00:24:15.352 "write": true, 00:24:15.352 "unmap": true, 00:24:15.352 "flush": true, 00:24:15.352 "reset": true, 00:24:15.352 
"nvme_admin": false, 00:24:15.352 "nvme_io": false, 00:24:15.352 "nvme_io_md": false, 00:24:15.352 "write_zeroes": true, 00:24:15.352 "zcopy": true, 00:24:15.352 "get_zone_info": false, 00:24:15.352 "zone_management": false, 00:24:15.352 "zone_append": false, 00:24:15.352 "compare": false, 00:24:15.352 "compare_and_write": false, 00:24:15.352 "abort": true, 00:24:15.352 "seek_hole": false, 00:24:15.352 "seek_data": false, 00:24:15.352 "copy": true, 00:24:15.352 "nvme_iov_md": false 00:24:15.352 }, 00:24:15.352 "memory_domains": [ 00:24:15.352 { 00:24:15.352 "dma_device_id": "system", 00:24:15.352 "dma_device_type": 1 00:24:15.352 }, 00:24:15.352 { 00:24:15.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.352 "dma_device_type": 2 00:24:15.352 } 00:24:15.352 ], 00:24:15.352 "driver_specific": {} 00:24:15.352 } 00:24:15.352 ] 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:15.352 07:21:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.352 "name": "Existed_Raid", 00:24:15.352 "uuid": "8f5d6af2-db06-446e-a89e-73ac762451bc", 00:24:15.352 "strip_size_kb": 64, 00:24:15.352 "state": "online", 00:24:15.352 "raid_level": "raid0", 00:24:15.352 "superblock": false, 00:24:15.352 "num_base_bdevs": 4, 00:24:15.352 "num_base_bdevs_discovered": 4, 00:24:15.352 "num_base_bdevs_operational": 4, 00:24:15.352 "base_bdevs_list": [ 00:24:15.352 { 00:24:15.352 "name": "BaseBdev1", 00:24:15.352 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:15.352 "is_configured": true, 00:24:15.352 "data_offset": 0, 00:24:15.352 "data_size": 65536 00:24:15.352 }, 00:24:15.352 { 00:24:15.352 "name": "BaseBdev2", 00:24:15.352 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:15.352 "is_configured": true, 00:24:15.352 "data_offset": 0, 00:24:15.352 "data_size": 65536 00:24:15.352 }, 00:24:15.352 { 00:24:15.352 "name": "BaseBdev3", 00:24:15.352 "uuid": 
"ab9de08a-599c-49a4-8d56-773d0453450d", 00:24:15.352 "is_configured": true, 00:24:15.352 "data_offset": 0, 00:24:15.352 "data_size": 65536 00:24:15.352 }, 00:24:15.352 { 00:24:15.352 "name": "BaseBdev4", 00:24:15.352 "uuid": "26e2a596-4607-4eab-874f-194e7c9af7f2", 00:24:15.352 "is_configured": true, 00:24:15.352 "data_offset": 0, 00:24:15.352 "data_size": 65536 00:24:15.352 } 00:24:15.352 ] 00:24:15.352 }' 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.352 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.920 [2024-11-20 07:21:39.960521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.920 07:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.920 07:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.920 "name": "Existed_Raid", 00:24:15.920 "aliases": [ 00:24:15.920 "8f5d6af2-db06-446e-a89e-73ac762451bc" 00:24:15.920 ], 00:24:15.920 "product_name": "Raid Volume", 00:24:15.920 "block_size": 512, 00:24:15.920 "num_blocks": 262144, 00:24:15.920 "uuid": "8f5d6af2-db06-446e-a89e-73ac762451bc", 00:24:15.920 "assigned_rate_limits": { 00:24:15.920 "rw_ios_per_sec": 0, 00:24:15.920 "rw_mbytes_per_sec": 0, 00:24:15.920 "r_mbytes_per_sec": 0, 00:24:15.920 "w_mbytes_per_sec": 0 00:24:15.920 }, 00:24:15.920 "claimed": false, 00:24:15.920 "zoned": false, 00:24:15.920 "supported_io_types": { 00:24:15.920 "read": true, 00:24:15.920 "write": true, 00:24:15.920 "unmap": true, 00:24:15.920 "flush": true, 00:24:15.920 "reset": true, 00:24:15.920 "nvme_admin": false, 00:24:15.920 "nvme_io": false, 00:24:15.920 "nvme_io_md": false, 00:24:15.920 "write_zeroes": true, 00:24:15.920 "zcopy": false, 00:24:15.920 "get_zone_info": false, 00:24:15.920 "zone_management": false, 00:24:15.920 "zone_append": false, 00:24:15.920 "compare": false, 00:24:15.920 "compare_and_write": false, 00:24:15.920 "abort": false, 00:24:15.920 "seek_hole": false, 00:24:15.920 "seek_data": false, 00:24:15.920 "copy": false, 00:24:15.920 "nvme_iov_md": false 00:24:15.920 }, 00:24:15.920 "memory_domains": [ 00:24:15.920 { 00:24:15.920 "dma_device_id": "system", 00:24:15.920 "dma_device_type": 1 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.920 "dma_device_type": 2 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "system", 00:24:15.920 "dma_device_type": 1 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.920 "dma_device_type": 2 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "system", 00:24:15.920 "dma_device_type": 1 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:15.920 "dma_device_type": 2 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "system", 00:24:15.920 "dma_device_type": 1 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.920 "dma_device_type": 2 00:24:15.920 } 00:24:15.920 ], 00:24:15.920 "driver_specific": { 00:24:15.920 "raid": { 00:24:15.920 "uuid": "8f5d6af2-db06-446e-a89e-73ac762451bc", 00:24:15.920 "strip_size_kb": 64, 00:24:15.920 "state": "online", 00:24:15.920 "raid_level": "raid0", 00:24:15.920 "superblock": false, 00:24:15.920 "num_base_bdevs": 4, 00:24:15.920 "num_base_bdevs_discovered": 4, 00:24:15.920 "num_base_bdevs_operational": 4, 00:24:15.920 "base_bdevs_list": [ 00:24:15.920 { 00:24:15.920 "name": "BaseBdev1", 00:24:15.920 "uuid": "708421fe-63a3-431f-aabe-83fe68a01197", 00:24:15.920 "is_configured": true, 00:24:15.920 "data_offset": 0, 00:24:15.920 "data_size": 65536 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "name": "BaseBdev2", 00:24:15.920 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:15.920 "is_configured": true, 00:24:15.920 "data_offset": 0, 00:24:15.920 "data_size": 65536 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "name": "BaseBdev3", 00:24:15.920 "uuid": "ab9de08a-599c-49a4-8d56-773d0453450d", 00:24:15.920 "is_configured": true, 00:24:15.920 "data_offset": 0, 00:24:15.920 "data_size": 65536 00:24:15.920 }, 00:24:15.920 { 00:24:15.920 "name": "BaseBdev4", 00:24:15.920 "uuid": "26e2a596-4607-4eab-874f-194e7c9af7f2", 00:24:15.920 "is_configured": true, 00:24:15.920 "data_offset": 0, 00:24:15.920 "data_size": 65536 00:24:15.920 } 00:24:15.920 ] 00:24:15.920 } 00:24:15.920 } 00:24:15.920 }' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:15.920 BaseBdev2 00:24:15.920 BaseBdev3 
00:24:15.920 BaseBdev4' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.920 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.179 07:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.179 07:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.179 [2024-11-20 07:21:40.340282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:16.179 [2024-11-20 07:21:40.340324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.179 [2024-11-20 07:21:40.340395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.179 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.438 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.438 "name": "Existed_Raid", 00:24:16.438 "uuid": "8f5d6af2-db06-446e-a89e-73ac762451bc", 00:24:16.438 "strip_size_kb": 64, 00:24:16.438 "state": "offline", 00:24:16.438 "raid_level": "raid0", 00:24:16.438 "superblock": false, 00:24:16.438 "num_base_bdevs": 4, 00:24:16.438 "num_base_bdevs_discovered": 3, 00:24:16.438 "num_base_bdevs_operational": 3, 00:24:16.438 "base_bdevs_list": [ 00:24:16.438 { 00:24:16.438 "name": null, 00:24:16.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.438 "is_configured": false, 00:24:16.438 "data_offset": 0, 00:24:16.438 "data_size": 65536 00:24:16.438 }, 00:24:16.438 { 00:24:16.438 "name": "BaseBdev2", 00:24:16.438 "uuid": "c40f6309-987c-412b-b5e8-3d984851d9ea", 00:24:16.438 "is_configured": 
true, 00:24:16.438 "data_offset": 0, 00:24:16.438 "data_size": 65536 00:24:16.438 }, 00:24:16.438 { 00:24:16.438 "name": "BaseBdev3", 00:24:16.438 "uuid": "ab9de08a-599c-49a4-8d56-773d0453450d", 00:24:16.438 "is_configured": true, 00:24:16.438 "data_offset": 0, 00:24:16.438 "data_size": 65536 00:24:16.438 }, 00:24:16.438 { 00:24:16.438 "name": "BaseBdev4", 00:24:16.438 "uuid": "26e2a596-4607-4eab-874f-194e7c9af7f2", 00:24:16.438 "is_configured": true, 00:24:16.438 "data_offset": 0, 00:24:16.438 "data_size": 65536 00:24:16.438 } 00:24:16.438 ] 00:24:16.438 }' 00:24:16.438 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.438 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.697 07:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:16.956 07:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.956 [2024-11-20 07:21:41.024910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.956 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.957 [2024-11-20 07:21:41.171257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:17.215 07:21:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.215 [2024-11-20 07:21:41.317881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:17.215 [2024-11-20 07:21:41.317949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:17.215 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:17.216 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.216 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 BaseBdev2 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.475 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 [ 00:24:17.475 { 00:24:17.475 "name": "BaseBdev2", 00:24:17.475 "aliases": [ 00:24:17.475 "1fcc8501-ebf3-45e9-af94-0458fac6e804" 00:24:17.475 ], 00:24:17.475 "product_name": "Malloc disk", 00:24:17.475 "block_size": 512, 00:24:17.475 "num_blocks": 65536, 00:24:17.475 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:17.475 "assigned_rate_limits": { 00:24:17.475 "rw_ios_per_sec": 0, 00:24:17.475 "rw_mbytes_per_sec": 0, 00:24:17.475 "r_mbytes_per_sec": 0, 00:24:17.475 "w_mbytes_per_sec": 0 00:24:17.475 }, 00:24:17.475 "claimed": false, 00:24:17.475 "zoned": false, 00:24:17.475 "supported_io_types": { 00:24:17.475 "read": true, 00:24:17.475 "write": true, 00:24:17.475 "unmap": true, 00:24:17.476 "flush": true, 00:24:17.476 "reset": true, 00:24:17.476 "nvme_admin": false, 00:24:17.476 "nvme_io": false, 00:24:17.476 "nvme_io_md": false, 00:24:17.476 "write_zeroes": true, 00:24:17.476 "zcopy": true, 00:24:17.476 "get_zone_info": false, 00:24:17.476 "zone_management": false, 00:24:17.476 "zone_append": false, 00:24:17.476 "compare": false, 00:24:17.476 "compare_and_write": false, 00:24:17.476 "abort": true, 00:24:17.476 "seek_hole": false, 00:24:17.476 "seek_data": false, 
00:24:17.476 "copy": true, 00:24:17.476 "nvme_iov_md": false 00:24:17.476 }, 00:24:17.476 "memory_domains": [ 00:24:17.476 { 00:24:17.476 "dma_device_id": "system", 00:24:17.476 "dma_device_type": 1 00:24:17.476 }, 00:24:17.476 { 00:24:17.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.476 "dma_device_type": 2 00:24:17.476 } 00:24:17.476 ], 00:24:17.476 "driver_specific": {} 00:24:17.476 } 00:24:17.476 ] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 BaseBdev3 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:17.476 
07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 [ 00:24:17.476 { 00:24:17.476 "name": "BaseBdev3", 00:24:17.476 "aliases": [ 00:24:17.476 "7b2f62a9-e5ff-43df-a542-05e1284a6741" 00:24:17.476 ], 00:24:17.476 "product_name": "Malloc disk", 00:24:17.476 "block_size": 512, 00:24:17.476 "num_blocks": 65536, 00:24:17.476 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:17.476 "assigned_rate_limits": { 00:24:17.476 "rw_ios_per_sec": 0, 00:24:17.476 "rw_mbytes_per_sec": 0, 00:24:17.476 "r_mbytes_per_sec": 0, 00:24:17.476 "w_mbytes_per_sec": 0 00:24:17.476 }, 00:24:17.476 "claimed": false, 00:24:17.476 "zoned": false, 00:24:17.476 "supported_io_types": { 00:24:17.476 "read": true, 00:24:17.476 "write": true, 00:24:17.476 "unmap": true, 00:24:17.476 "flush": true, 00:24:17.476 "reset": true, 00:24:17.476 "nvme_admin": false, 00:24:17.476 "nvme_io": false, 00:24:17.476 "nvme_io_md": false, 00:24:17.476 "write_zeroes": true, 00:24:17.476 "zcopy": true, 00:24:17.476 "get_zone_info": false, 00:24:17.476 "zone_management": false, 00:24:17.476 "zone_append": false, 00:24:17.476 "compare": false, 00:24:17.476 "compare_and_write": false, 00:24:17.476 "abort": true, 00:24:17.476 "seek_hole": false, 00:24:17.476 "seek_data": false, 00:24:17.476 
"copy": true, 00:24:17.476 "nvme_iov_md": false 00:24:17.476 }, 00:24:17.476 "memory_domains": [ 00:24:17.476 { 00:24:17.476 "dma_device_id": "system", 00:24:17.476 "dma_device_type": 1 00:24:17.476 }, 00:24:17.476 { 00:24:17.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.476 "dma_device_type": 2 00:24:17.476 } 00:24:17.476 ], 00:24:17.476 "driver_specific": {} 00:24:17.476 } 00:24:17.476 ] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 BaseBdev4 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:17.476 07:21:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.476 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.476 [ 00:24:17.476 { 00:24:17.476 "name": "BaseBdev4", 00:24:17.476 "aliases": [ 00:24:17.476 "bf65f870-0606-4265-b2d8-bc0fc065a820" 00:24:17.476 ], 00:24:17.476 "product_name": "Malloc disk", 00:24:17.476 "block_size": 512, 00:24:17.476 "num_blocks": 65536, 00:24:17.476 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:17.476 "assigned_rate_limits": { 00:24:17.476 "rw_ios_per_sec": 0, 00:24:17.476 "rw_mbytes_per_sec": 0, 00:24:17.476 "r_mbytes_per_sec": 0, 00:24:17.476 "w_mbytes_per_sec": 0 00:24:17.476 }, 00:24:17.476 "claimed": false, 00:24:17.476 "zoned": false, 00:24:17.476 "supported_io_types": { 00:24:17.476 "read": true, 00:24:17.476 "write": true, 00:24:17.476 "unmap": true, 00:24:17.476 "flush": true, 00:24:17.476 "reset": true, 00:24:17.476 "nvme_admin": false, 00:24:17.476 "nvme_io": false, 00:24:17.476 "nvme_io_md": false, 00:24:17.476 "write_zeroes": true, 00:24:17.476 "zcopy": true, 00:24:17.476 "get_zone_info": false, 00:24:17.476 "zone_management": false, 00:24:17.476 "zone_append": false, 00:24:17.476 "compare": false, 00:24:17.477 "compare_and_write": false, 00:24:17.477 "abort": true, 00:24:17.477 "seek_hole": false, 00:24:17.477 "seek_data": false, 00:24:17.477 "copy": true, 
00:24:17.477 "nvme_iov_md": false 00:24:17.477 }, 00:24:17.477 "memory_domains": [ 00:24:17.477 { 00:24:17.477 "dma_device_id": "system", 00:24:17.477 "dma_device_type": 1 00:24:17.477 }, 00:24:17.477 { 00:24:17.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.477 "dma_device_type": 2 00:24:17.477 } 00:24:17.477 ], 00:24:17.477 "driver_specific": {} 00:24:17.477 } 00:24:17.477 ] 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.477 [2024-11-20 07:21:41.686373] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.477 [2024-11-20 07:21:41.686558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.477 [2024-11-20 07:21:41.686627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:17.477 [2024-11-20 07:21:41.689121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:17.477 [2024-11-20 07:21:41.689194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.477 07:21:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:17.477 "name": "Existed_Raid", 00:24:17.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.477 "strip_size_kb": 64, 00:24:17.477 "state": "configuring", 00:24:17.477 
"raid_level": "raid0", 00:24:17.477 "superblock": false, 00:24:17.477 "num_base_bdevs": 4, 00:24:17.477 "num_base_bdevs_discovered": 3, 00:24:17.477 "num_base_bdevs_operational": 4, 00:24:17.477 "base_bdevs_list": [ 00:24:17.477 { 00:24:17.477 "name": "BaseBdev1", 00:24:17.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.477 "is_configured": false, 00:24:17.477 "data_offset": 0, 00:24:17.477 "data_size": 0 00:24:17.477 }, 00:24:17.477 { 00:24:17.477 "name": "BaseBdev2", 00:24:17.477 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:17.477 "is_configured": true, 00:24:17.477 "data_offset": 0, 00:24:17.477 "data_size": 65536 00:24:17.477 }, 00:24:17.477 { 00:24:17.477 "name": "BaseBdev3", 00:24:17.477 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:17.477 "is_configured": true, 00:24:17.477 "data_offset": 0, 00:24:17.477 "data_size": 65536 00:24:17.477 }, 00:24:17.477 { 00:24:17.477 "name": "BaseBdev4", 00:24:17.477 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:17.477 "is_configured": true, 00:24:17.477 "data_offset": 0, 00:24:17.477 "data_size": 65536 00:24:17.477 } 00:24:17.477 ] 00:24:17.477 }' 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:17.477 07:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.045 [2024-11-20 07:21:42.222548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.045 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.045 "name": "Existed_Raid", 00:24:18.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.045 "strip_size_kb": 64, 00:24:18.045 "state": "configuring", 00:24:18.045 "raid_level": "raid0", 00:24:18.045 "superblock": false, 00:24:18.045 
"num_base_bdevs": 4, 00:24:18.045 "num_base_bdevs_discovered": 2, 00:24:18.045 "num_base_bdevs_operational": 4, 00:24:18.045 "base_bdevs_list": [ 00:24:18.045 { 00:24:18.045 "name": "BaseBdev1", 00:24:18.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.045 "is_configured": false, 00:24:18.045 "data_offset": 0, 00:24:18.045 "data_size": 0 00:24:18.045 }, 00:24:18.045 { 00:24:18.045 "name": null, 00:24:18.045 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:18.045 "is_configured": false, 00:24:18.045 "data_offset": 0, 00:24:18.045 "data_size": 65536 00:24:18.045 }, 00:24:18.045 { 00:24:18.046 "name": "BaseBdev3", 00:24:18.046 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:18.046 "is_configured": true, 00:24:18.046 "data_offset": 0, 00:24:18.046 "data_size": 65536 00:24:18.046 }, 00:24:18.046 { 00:24:18.046 "name": "BaseBdev4", 00:24:18.046 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:18.046 "is_configured": true, 00:24:18.046 "data_offset": 0, 00:24:18.046 "data_size": 65536 00:24:18.046 } 00:24:18.046 ] 00:24:18.046 }' 00:24:18.046 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.046 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:18.613 07:21:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 [2024-11-20 07:21:42.824748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.613 BaseBdev1 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 07:21:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:18.613 [ 00:24:18.613 { 00:24:18.613 "name": "BaseBdev1", 00:24:18.613 "aliases": [ 00:24:18.613 "30c2e4db-2e99-4d08-b4dd-bc197c693f88" 00:24:18.613 ], 00:24:18.613 "product_name": "Malloc disk", 00:24:18.613 "block_size": 512, 00:24:18.613 "num_blocks": 65536, 00:24:18.613 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:18.613 "assigned_rate_limits": { 00:24:18.613 "rw_ios_per_sec": 0, 00:24:18.613 "rw_mbytes_per_sec": 0, 00:24:18.613 "r_mbytes_per_sec": 0, 00:24:18.613 "w_mbytes_per_sec": 0 00:24:18.613 }, 00:24:18.613 "claimed": true, 00:24:18.613 "claim_type": "exclusive_write", 00:24:18.613 "zoned": false, 00:24:18.613 "supported_io_types": { 00:24:18.614 "read": true, 00:24:18.614 "write": true, 00:24:18.614 "unmap": true, 00:24:18.614 "flush": true, 00:24:18.614 "reset": true, 00:24:18.614 "nvme_admin": false, 00:24:18.614 "nvme_io": false, 00:24:18.614 "nvme_io_md": false, 00:24:18.614 "write_zeroes": true, 00:24:18.614 "zcopy": true, 00:24:18.614 "get_zone_info": false, 00:24:18.614 "zone_management": false, 00:24:18.614 "zone_append": false, 00:24:18.614 "compare": false, 00:24:18.614 "compare_and_write": false, 00:24:18.614 "abort": true, 00:24:18.614 "seek_hole": false, 00:24:18.614 "seek_data": false, 00:24:18.614 "copy": true, 00:24:18.614 "nvme_iov_md": false 00:24:18.614 }, 00:24:18.614 "memory_domains": [ 00:24:18.614 { 00:24:18.614 "dma_device_id": "system", 00:24:18.614 "dma_device_type": 1 00:24:18.614 }, 00:24:18.614 { 00:24:18.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.614 "dma_device_type": 2 00:24:18.614 } 00:24:18.614 ], 00:24:18.614 "driver_specific": {} 00:24:18.614 } 00:24:18.614 ] 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.614 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.873 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.873 "name": "Existed_Raid", 00:24:18.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.873 "strip_size_kb": 64, 00:24:18.873 "state": "configuring", 00:24:18.873 "raid_level": "raid0", 00:24:18.873 "superblock": false, 
00:24:18.873 "num_base_bdevs": 4, 00:24:18.873 "num_base_bdevs_discovered": 3, 00:24:18.873 "num_base_bdevs_operational": 4, 00:24:18.873 "base_bdevs_list": [ 00:24:18.873 { 00:24:18.873 "name": "BaseBdev1", 00:24:18.873 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:18.873 "is_configured": true, 00:24:18.873 "data_offset": 0, 00:24:18.873 "data_size": 65536 00:24:18.873 }, 00:24:18.873 { 00:24:18.873 "name": null, 00:24:18.873 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:18.873 "is_configured": false, 00:24:18.873 "data_offset": 0, 00:24:18.873 "data_size": 65536 00:24:18.873 }, 00:24:18.873 { 00:24:18.873 "name": "BaseBdev3", 00:24:18.873 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:18.873 "is_configured": true, 00:24:18.873 "data_offset": 0, 00:24:18.873 "data_size": 65536 00:24:18.873 }, 00:24:18.873 { 00:24:18.873 "name": "BaseBdev4", 00:24:18.873 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:18.873 "is_configured": true, 00:24:18.873 "data_offset": 0, 00:24:18.873 "data_size": 65536 00:24:18.873 } 00:24:18.873 ] 00:24:18.873 }' 00:24:18.873 07:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.873 07:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.132 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:19.132 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.132 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.132 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:19.391 07:21:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.391 [2024-11-20 07:21:43.477048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.391 07:21:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.391 "name": "Existed_Raid", 00:24:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.391 "strip_size_kb": 64, 00:24:19.391 "state": "configuring", 00:24:19.391 "raid_level": "raid0", 00:24:19.391 "superblock": false, 00:24:19.391 "num_base_bdevs": 4, 00:24:19.391 "num_base_bdevs_discovered": 2, 00:24:19.391 "num_base_bdevs_operational": 4, 00:24:19.391 "base_bdevs_list": [ 00:24:19.391 { 00:24:19.391 "name": "BaseBdev1", 00:24:19.391 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:19.391 "is_configured": true, 00:24:19.391 "data_offset": 0, 00:24:19.391 "data_size": 65536 00:24:19.391 }, 00:24:19.391 { 00:24:19.391 "name": null, 00:24:19.391 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:19.391 "is_configured": false, 00:24:19.391 "data_offset": 0, 00:24:19.391 "data_size": 65536 00:24:19.391 }, 00:24:19.391 { 00:24:19.391 "name": null, 00:24:19.391 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:19.391 "is_configured": false, 00:24:19.391 "data_offset": 0, 00:24:19.391 "data_size": 65536 00:24:19.391 }, 00:24:19.391 { 00:24:19.391 "name": "BaseBdev4", 00:24:19.391 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:19.391 "is_configured": true, 00:24:19.391 "data_offset": 0, 00:24:19.391 "data_size": 65536 00:24:19.391 } 00:24:19.391 ] 00:24:19.391 }' 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.391 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 07:21:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.958 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.958 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 07:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:19.958 07:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 [2024-11-20 07:21:44.017180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.958 "name": "Existed_Raid", 00:24:19.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.958 "strip_size_kb": 64, 00:24:19.958 "state": "configuring", 00:24:19.958 "raid_level": "raid0", 00:24:19.958 "superblock": false, 00:24:19.958 "num_base_bdevs": 4, 00:24:19.958 "num_base_bdevs_discovered": 3, 00:24:19.958 "num_base_bdevs_operational": 4, 00:24:19.958 "base_bdevs_list": [ 00:24:19.958 { 00:24:19.958 "name": "BaseBdev1", 00:24:19.958 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:19.958 "is_configured": true, 00:24:19.958 "data_offset": 0, 00:24:19.958 "data_size": 65536 00:24:19.958 }, 00:24:19.958 { 00:24:19.958 "name": null, 00:24:19.958 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:19.958 "is_configured": false, 00:24:19.958 "data_offset": 0, 00:24:19.958 "data_size": 65536 00:24:19.958 }, 00:24:19.958 { 00:24:19.958 "name": "BaseBdev3", 00:24:19.958 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 
00:24:19.958 "is_configured": true, 00:24:19.958 "data_offset": 0, 00:24:19.958 "data_size": 65536 00:24:19.958 }, 00:24:19.958 { 00:24:19.958 "name": "BaseBdev4", 00:24:19.958 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:19.958 "is_configured": true, 00:24:19.958 "data_offset": 0, 00:24:19.958 "data_size": 65536 00:24:19.958 } 00:24:19.958 ] 00:24:19.958 }' 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.958 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.526 [2024-11-20 07:21:44.565365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.526 07:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.526 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.526 "name": "Existed_Raid", 00:24:20.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.526 "strip_size_kb": 64, 00:24:20.526 "state": "configuring", 00:24:20.526 "raid_level": "raid0", 00:24:20.526 "superblock": false, 00:24:20.526 "num_base_bdevs": 4, 00:24:20.526 "num_base_bdevs_discovered": 2, 00:24:20.526 
"num_base_bdevs_operational": 4, 00:24:20.526 "base_bdevs_list": [ 00:24:20.526 { 00:24:20.526 "name": null, 00:24:20.526 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:20.526 "is_configured": false, 00:24:20.526 "data_offset": 0, 00:24:20.526 "data_size": 65536 00:24:20.526 }, 00:24:20.526 { 00:24:20.526 "name": null, 00:24:20.526 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:20.526 "is_configured": false, 00:24:20.526 "data_offset": 0, 00:24:20.526 "data_size": 65536 00:24:20.526 }, 00:24:20.526 { 00:24:20.526 "name": "BaseBdev3", 00:24:20.526 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:20.526 "is_configured": true, 00:24:20.527 "data_offset": 0, 00:24:20.527 "data_size": 65536 00:24:20.527 }, 00:24:20.527 { 00:24:20.527 "name": "BaseBdev4", 00:24:20.527 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:20.527 "is_configured": true, 00:24:20.527 "data_offset": 0, 00:24:20.527 "data_size": 65536 00:24:20.527 } 00:24:20.527 ] 00:24:20.527 }' 00:24:20.527 07:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.527 07:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.093 [2024-11-20 07:21:45.260268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.093 
07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.093 "name": "Existed_Raid", 00:24:21.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.093 "strip_size_kb": 64, 00:24:21.093 "state": "configuring", 00:24:21.093 "raid_level": "raid0", 00:24:21.093 "superblock": false, 00:24:21.093 "num_base_bdevs": 4, 00:24:21.093 "num_base_bdevs_discovered": 3, 00:24:21.093 "num_base_bdevs_operational": 4, 00:24:21.093 "base_bdevs_list": [ 00:24:21.093 { 00:24:21.093 "name": null, 00:24:21.093 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:21.093 "is_configured": false, 00:24:21.093 "data_offset": 0, 00:24:21.093 "data_size": 65536 00:24:21.093 }, 00:24:21.093 { 00:24:21.093 "name": "BaseBdev2", 00:24:21.093 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:21.093 "is_configured": true, 00:24:21.093 "data_offset": 0, 00:24:21.093 "data_size": 65536 00:24:21.093 }, 00:24:21.093 { 00:24:21.093 "name": "BaseBdev3", 00:24:21.093 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:21.093 "is_configured": true, 00:24:21.093 "data_offset": 0, 00:24:21.093 "data_size": 65536 00:24:21.093 }, 00:24:21.093 { 00:24:21.093 "name": "BaseBdev4", 00:24:21.093 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:21.093 "is_configured": true, 00:24:21.093 "data_offset": 0, 00:24:21.093 "data_size": 65536 00:24:21.093 } 00:24:21.093 ] 00:24:21.093 }' 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.093 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.662 07:21:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 30c2e4db-2e99-4d08-b4dd-bc197c693f88 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.662 [2024-11-20 07:21:45.910538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:21.662 NewBaseBdev 00:24:21.662 [2024-11-20 07:21:45.910871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:21.662 [2024-11-20 07:21:45.910897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:21.662 [2024-11-20 07:21:45.911248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:24:21.662 [2024-11-20 07:21:45.911441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:21.662 [2024-11-20 07:21:45.911467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:21.662 [2024-11-20 07:21:45.911787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:21.662 [ 00:24:21.662 { 00:24:21.662 "name": "NewBaseBdev", 00:24:21.662 "aliases": [ 00:24:21.662 "30c2e4db-2e99-4d08-b4dd-bc197c693f88" 00:24:21.662 ], 00:24:21.662 "product_name": "Malloc disk", 00:24:21.662 "block_size": 512, 00:24:21.662 "num_blocks": 65536, 00:24:21.662 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:21.662 "assigned_rate_limits": { 00:24:21.662 "rw_ios_per_sec": 0, 00:24:21.662 "rw_mbytes_per_sec": 0, 00:24:21.662 "r_mbytes_per_sec": 0, 00:24:21.662 "w_mbytes_per_sec": 0 00:24:21.662 }, 00:24:21.662 "claimed": true, 00:24:21.662 "claim_type": "exclusive_write", 00:24:21.662 "zoned": false, 00:24:21.662 "supported_io_types": { 00:24:21.662 "read": true, 00:24:21.662 "write": true, 00:24:21.662 "unmap": true, 00:24:21.662 "flush": true, 00:24:21.662 "reset": true, 00:24:21.662 "nvme_admin": false, 00:24:21.662 "nvme_io": false, 00:24:21.662 "nvme_io_md": false, 00:24:21.662 "write_zeroes": true, 00:24:21.662 "zcopy": true, 00:24:21.662 "get_zone_info": false, 00:24:21.662 "zone_management": false, 00:24:21.662 "zone_append": false, 00:24:21.662 "compare": false, 00:24:21.662 "compare_and_write": false, 00:24:21.662 "abort": true, 00:24:21.662 "seek_hole": false, 00:24:21.662 "seek_data": false, 00:24:21.662 "copy": true, 00:24:21.662 "nvme_iov_md": false 00:24:21.662 }, 00:24:21.662 "memory_domains": [ 00:24:21.662 { 00:24:21.662 "dma_device_id": "system", 00:24:21.662 "dma_device_type": 1 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.662 "dma_device_type": 2 00:24:21.662 } 00:24:21.662 ], 00:24:21.662 "driver_specific": {} 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.662 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.663 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.920 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.920 07:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.920 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.921 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.921 07:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.921 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.921 "name": "Existed_Raid", 00:24:21.921 "uuid": "835e6bf4-37f9-4f82-927e-3545f98cc762", 00:24:21.921 "strip_size_kb": 64, 00:24:21.921 "state": "online", 00:24:21.921 "raid_level": "raid0", 00:24:21.921 "superblock": false, 00:24:21.921 "num_base_bdevs": 4, 00:24:21.921 
"num_base_bdevs_discovered": 4, 00:24:21.921 "num_base_bdevs_operational": 4, 00:24:21.921 "base_bdevs_list": [ 00:24:21.921 { 00:24:21.921 "name": "NewBaseBdev", 00:24:21.921 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:21.921 "is_configured": true, 00:24:21.921 "data_offset": 0, 00:24:21.921 "data_size": 65536 00:24:21.921 }, 00:24:21.921 { 00:24:21.921 "name": "BaseBdev2", 00:24:21.921 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:21.921 "is_configured": true, 00:24:21.921 "data_offset": 0, 00:24:21.921 "data_size": 65536 00:24:21.921 }, 00:24:21.921 { 00:24:21.921 "name": "BaseBdev3", 00:24:21.921 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:21.921 "is_configured": true, 00:24:21.921 "data_offset": 0, 00:24:21.921 "data_size": 65536 00:24:21.921 }, 00:24:21.921 { 00:24:21.921 "name": "BaseBdev4", 00:24:21.921 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:21.921 "is_configured": true, 00:24:21.921 "data_offset": 0, 00:24:21.921 "data_size": 65536 00:24:21.921 } 00:24:21.921 ] 00:24:21.921 }' 00:24:21.921 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.921 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.179 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.179 [2024-11-20 07:21:46.455243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:22.438 "name": "Existed_Raid", 00:24:22.438 "aliases": [ 00:24:22.438 "835e6bf4-37f9-4f82-927e-3545f98cc762" 00:24:22.438 ], 00:24:22.438 "product_name": "Raid Volume", 00:24:22.438 "block_size": 512, 00:24:22.438 "num_blocks": 262144, 00:24:22.438 "uuid": "835e6bf4-37f9-4f82-927e-3545f98cc762", 00:24:22.438 "assigned_rate_limits": { 00:24:22.438 "rw_ios_per_sec": 0, 00:24:22.438 "rw_mbytes_per_sec": 0, 00:24:22.438 "r_mbytes_per_sec": 0, 00:24:22.438 "w_mbytes_per_sec": 0 00:24:22.438 }, 00:24:22.438 "claimed": false, 00:24:22.438 "zoned": false, 00:24:22.438 "supported_io_types": { 00:24:22.438 "read": true, 00:24:22.438 "write": true, 00:24:22.438 "unmap": true, 00:24:22.438 "flush": true, 00:24:22.438 "reset": true, 00:24:22.438 "nvme_admin": false, 00:24:22.438 "nvme_io": false, 00:24:22.438 "nvme_io_md": false, 00:24:22.438 "write_zeroes": true, 00:24:22.438 "zcopy": false, 00:24:22.438 "get_zone_info": false, 00:24:22.438 "zone_management": false, 00:24:22.438 "zone_append": false, 00:24:22.438 "compare": false, 00:24:22.438 "compare_and_write": false, 00:24:22.438 "abort": false, 00:24:22.438 "seek_hole": false, 00:24:22.438 "seek_data": false, 00:24:22.438 "copy": false, 00:24:22.438 "nvme_iov_md": false 00:24:22.438 }, 00:24:22.438 "memory_domains": [ 
00:24:22.438 { 00:24:22.438 "dma_device_id": "system", 00:24:22.438 "dma_device_type": 1 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.438 "dma_device_type": 2 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "system", 00:24:22.438 "dma_device_type": 1 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.438 "dma_device_type": 2 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "system", 00:24:22.438 "dma_device_type": 1 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.438 "dma_device_type": 2 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "system", 00:24:22.438 "dma_device_type": 1 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.438 "dma_device_type": 2 00:24:22.438 } 00:24:22.438 ], 00:24:22.438 "driver_specific": { 00:24:22.438 "raid": { 00:24:22.438 "uuid": "835e6bf4-37f9-4f82-927e-3545f98cc762", 00:24:22.438 "strip_size_kb": 64, 00:24:22.438 "state": "online", 00:24:22.438 "raid_level": "raid0", 00:24:22.438 "superblock": false, 00:24:22.438 "num_base_bdevs": 4, 00:24:22.438 "num_base_bdevs_discovered": 4, 00:24:22.438 "num_base_bdevs_operational": 4, 00:24:22.438 "base_bdevs_list": [ 00:24:22.438 { 00:24:22.438 "name": "NewBaseBdev", 00:24:22.438 "uuid": "30c2e4db-2e99-4d08-b4dd-bc197c693f88", 00:24:22.438 "is_configured": true, 00:24:22.438 "data_offset": 0, 00:24:22.438 "data_size": 65536 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "name": "BaseBdev2", 00:24:22.438 "uuid": "1fcc8501-ebf3-45e9-af94-0458fac6e804", 00:24:22.438 "is_configured": true, 00:24:22.438 "data_offset": 0, 00:24:22.438 "data_size": 65536 00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "name": "BaseBdev3", 00:24:22.438 "uuid": "7b2f62a9-e5ff-43df-a542-05e1284a6741", 00:24:22.438 "is_configured": true, 00:24:22.438 "data_offset": 0, 00:24:22.438 "data_size": 65536 
00:24:22.438 }, 00:24:22.438 { 00:24:22.438 "name": "BaseBdev4", 00:24:22.438 "uuid": "bf65f870-0606-4265-b2d8-bc0fc065a820", 00:24:22.438 "is_configured": true, 00:24:22.438 "data_offset": 0, 00:24:22.438 "data_size": 65536 00:24:22.438 } 00:24:22.438 ] 00:24:22.438 } 00:24:22.438 } 00:24:22.438 }' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:22.438 BaseBdev2 00:24:22.438 BaseBdev3 00:24:22.438 BaseBdev4' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.438 
07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.438 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.696 [2024-11-20 07:21:46.834901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.696 [2024-11-20 07:21:46.835103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.696 [2024-11-20 07:21:46.835320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.696 [2024-11-20 07:21:46.835423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.696 [2024-11-20 07:21:46.835441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69671 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69671 ']' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69671 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69671 00:24:22.696 killing process with pid 69671 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69671' 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69671 00:24:22.696 [2024-11-20 07:21:46.871351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:22.696 07:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69671 00:24:23.259 [2024-11-20 07:21:47.331520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:24.197 00:24:24.197 real 0m12.924s 00:24:24.197 user 0m21.370s 00:24:24.197 sys 0m1.732s 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.197 ************************************ 00:24:24.197 END TEST raid_state_function_test 00:24:24.197 ************************************ 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.197 07:21:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:24:24.197 07:21:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:24.197 07:21:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.197 07:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:24.197 ************************************ 00:24:24.197 START TEST raid_state_function_test_sb 00:24:24.197 ************************************ 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:24.197 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:24.198 
07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70353 00:24:24.198 Process raid pid: 70353 00:24:24.198 07:21:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70353' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70353 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70353 ']' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.198 07:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.455 [2024-11-20 07:21:48.523304] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:24.455 [2024-11-20 07:21:48.523454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.455 [2024-11-20 07:21:48.702235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.712 [2024-11-20 07:21:48.834887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.969 [2024-11-20 07:21:49.042577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.969 [2024-11-20 07:21:49.042668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.226 [2024-11-20 07:21:49.502318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.226 [2024-11-20 07:21:49.502400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.226 [2024-11-20 07:21:49.502419] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.226 [2024-11-20 07:21:49.502436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.226 [2024-11-20 07:21:49.502446] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:24:25.226 [2024-11-20 07:21:49.502460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:25.226 [2024-11-20 07:21:49.502470] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:25.226 [2024-11-20 07:21:49.502484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.226 07:21:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.226 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.483 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.483 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.483 "name": "Existed_Raid", 00:24:25.483 "uuid": "bd30dee7-f2d1-4c14-8c36-fd0786507f9f", 00:24:25.483 "strip_size_kb": 64, 00:24:25.483 "state": "configuring", 00:24:25.483 "raid_level": "raid0", 00:24:25.483 "superblock": true, 00:24:25.483 "num_base_bdevs": 4, 00:24:25.483 "num_base_bdevs_discovered": 0, 00:24:25.483 "num_base_bdevs_operational": 4, 00:24:25.483 "base_bdevs_list": [ 00:24:25.483 { 00:24:25.483 "name": "BaseBdev1", 00:24:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.483 "is_configured": false, 00:24:25.483 "data_offset": 0, 00:24:25.483 "data_size": 0 00:24:25.483 }, 00:24:25.483 { 00:24:25.483 "name": "BaseBdev2", 00:24:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.483 "is_configured": false, 00:24:25.483 "data_offset": 0, 00:24:25.483 "data_size": 0 00:24:25.483 }, 00:24:25.483 { 00:24:25.483 "name": "BaseBdev3", 00:24:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.483 "is_configured": false, 00:24:25.483 "data_offset": 0, 00:24:25.483 "data_size": 0 00:24:25.483 }, 00:24:25.483 { 00:24:25.483 "name": "BaseBdev4", 00:24:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.483 "is_configured": false, 00:24:25.483 "data_offset": 0, 00:24:25.483 "data_size": 0 00:24:25.483 } 00:24:25.483 ] 00:24:25.483 }' 00:24:25.483 07:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.483 07:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.741 07:21:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.741 [2024-11-20 07:21:50.022448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:25.741 [2024-11-20 07:21:50.022525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.741 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.999 [2024-11-20 07:21:50.030437] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.999 [2024-11-20 07:21:50.030501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.999 [2024-11-20 07:21:50.030518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.999 [2024-11-20 07:21:50.030535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.999 [2024-11-20 07:21:50.030545] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:25.999 [2024-11-20 07:21:50.030560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:25.999 [2024-11-20 07:21:50.030569] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:24:25.999 [2024-11-20 07:21:50.030603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.999 [2024-11-20 07:21:50.076449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.999 BaseBdev1 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.999 [ 00:24:25.999 { 00:24:25.999 "name": "BaseBdev1", 00:24:25.999 "aliases": [ 00:24:25.999 "f64047f6-c9e8-4747-8a25-7d6788cf21b0" 00:24:25.999 ], 00:24:25.999 "product_name": "Malloc disk", 00:24:25.999 "block_size": 512, 00:24:25.999 "num_blocks": 65536, 00:24:25.999 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:25.999 "assigned_rate_limits": { 00:24:25.999 "rw_ios_per_sec": 0, 00:24:25.999 "rw_mbytes_per_sec": 0, 00:24:25.999 "r_mbytes_per_sec": 0, 00:24:25.999 "w_mbytes_per_sec": 0 00:24:25.999 }, 00:24:25.999 "claimed": true, 00:24:25.999 "claim_type": "exclusive_write", 00:24:25.999 "zoned": false, 00:24:25.999 "supported_io_types": { 00:24:25.999 "read": true, 00:24:25.999 "write": true, 00:24:25.999 "unmap": true, 00:24:25.999 "flush": true, 00:24:25.999 "reset": true, 00:24:25.999 "nvme_admin": false, 00:24:25.999 "nvme_io": false, 00:24:25.999 "nvme_io_md": false, 00:24:25.999 "write_zeroes": true, 00:24:25.999 "zcopy": true, 00:24:25.999 "get_zone_info": false, 00:24:25.999 "zone_management": false, 00:24:25.999 "zone_append": false, 00:24:25.999 "compare": false, 00:24:25.999 "compare_and_write": false, 00:24:25.999 "abort": true, 00:24:25.999 "seek_hole": false, 00:24:25.999 "seek_data": false, 00:24:25.999 "copy": true, 00:24:25.999 "nvme_iov_md": false 00:24:25.999 }, 00:24:25.999 "memory_domains": [ 00:24:25.999 { 00:24:25.999 "dma_device_id": "system", 00:24:25.999 "dma_device_type": 1 00:24:25.999 }, 00:24:25.999 { 00:24:25.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.999 "dma_device_type": 2 00:24:25.999 } 
00:24:25.999 ], 00:24:25.999 "driver_specific": {} 00:24:25.999 } 00:24:25.999 ] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.999 07:21:50 
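After each `bdev_malloc_create`, the trace calls `waitforbdev`, which polls `bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. A rough self-contained sketch of that polling pattern is below; `rpc_cmd` is stubbed out here so the snippet runs without a live SPDK target, and the real helper in `autotest_common.sh` may structure its loop differently:

```shell
#!/usr/bin/env bash
# Stub standing in for the SPDK rpc.py wrapper, for illustration only;
# the real rpc_cmd talks to a running SPDK target over its RPC socket.
rpc_cmd() { echo '[{"name": "BaseBdev1"}]'; }

# Poll until the named bdev appears or the timeout expires.
waitforbdev() {
    local bdev_name=$1
    local timeout_ms=${2:-2000}
    local i
    for ((i = 0; i < timeout_ms / 100; i++)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $bdev_name" >&2
    return 1
}

waitforbdev BaseBdev1 2000 && echo "BaseBdev1 ready"
```

Once the wait succeeds, the test re-runs the state check and expects `num_base_bdevs_discovered` to have advanced by one while the raid remains `configuring`, as seen in the next dump.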
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.999 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.999 "name": "Existed_Raid", 00:24:25.999 "uuid": "c1283011-3a18-47ca-98e0-717d260de4bb", 00:24:25.999 "strip_size_kb": 64, 00:24:25.999 "state": "configuring", 00:24:25.999 "raid_level": "raid0", 00:24:25.999 "superblock": true, 00:24:25.999 "num_base_bdevs": 4, 00:24:25.999 "num_base_bdevs_discovered": 1, 00:24:25.999 "num_base_bdevs_operational": 4, 00:24:25.999 "base_bdevs_list": [ 00:24:25.999 { 00:24:25.999 "name": "BaseBdev1", 00:24:25.999 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:25.999 "is_configured": true, 00:24:25.999 "data_offset": 2048, 00:24:26.000 "data_size": 63488 00:24:26.000 }, 00:24:26.000 { 00:24:26.000 "name": "BaseBdev2", 00:24:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.000 "is_configured": false, 00:24:26.000 "data_offset": 0, 00:24:26.000 "data_size": 0 00:24:26.000 }, 00:24:26.000 { 00:24:26.000 "name": "BaseBdev3", 00:24:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.000 "is_configured": false, 00:24:26.000 "data_offset": 0, 00:24:26.000 "data_size": 0 00:24:26.000 }, 00:24:26.000 { 00:24:26.000 "name": "BaseBdev4", 00:24:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.000 "is_configured": false, 00:24:26.000 "data_offset": 0, 00:24:26.000 "data_size": 0 00:24:26.000 } 00:24:26.000 ] 00:24:26.000 }' 00:24:26.000 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.000 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.565 07:21:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.565 [2024-11-20 07:21:50.660644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:26.565 [2024-11-20 07:21:50.660723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.565 [2024-11-20 07:21:50.668730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.565 [2024-11-20 07:21:50.671204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.565 [2024-11-20 07:21:50.671266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.565 [2024-11-20 07:21:50.671284] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:26.565 [2024-11-20 07:21:50.671302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:26.565 [2024-11-20 07:21:50.671313] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:26.565 [2024-11-20 07:21:50.671326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:24:26.565 "name": "Existed_Raid", 00:24:26.565 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:26.565 "strip_size_kb": 64, 00:24:26.565 "state": "configuring", 00:24:26.565 "raid_level": "raid0", 00:24:26.565 "superblock": true, 00:24:26.565 "num_base_bdevs": 4, 00:24:26.565 "num_base_bdevs_discovered": 1, 00:24:26.565 "num_base_bdevs_operational": 4, 00:24:26.565 "base_bdevs_list": [ 00:24:26.565 { 00:24:26.565 "name": "BaseBdev1", 00:24:26.565 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:26.565 "is_configured": true, 00:24:26.565 "data_offset": 2048, 00:24:26.565 "data_size": 63488 00:24:26.565 }, 00:24:26.565 { 00:24:26.565 "name": "BaseBdev2", 00:24:26.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.565 "is_configured": false, 00:24:26.565 "data_offset": 0, 00:24:26.565 "data_size": 0 00:24:26.565 }, 00:24:26.565 { 00:24:26.565 "name": "BaseBdev3", 00:24:26.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.565 "is_configured": false, 00:24:26.565 "data_offset": 0, 00:24:26.565 "data_size": 0 00:24:26.565 }, 00:24:26.565 { 00:24:26.565 "name": "BaseBdev4", 00:24:26.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.565 "is_configured": false, 00:24:26.565 "data_offset": 0, 00:24:26.565 "data_size": 0 00:24:26.565 } 00:24:26.565 ] 00:24:26.565 }' 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.565 07:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.131 [2024-11-20 07:21:51.232027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:24:27.131 BaseBdev2 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.131 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.131 [ 00:24:27.131 { 00:24:27.131 "name": "BaseBdev2", 00:24:27.131 "aliases": [ 00:24:27.131 "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb" 00:24:27.131 ], 00:24:27.131 "product_name": "Malloc disk", 00:24:27.131 "block_size": 512, 00:24:27.131 "num_blocks": 65536, 00:24:27.131 "uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 
00:24:27.131 "assigned_rate_limits": { 00:24:27.131 "rw_ios_per_sec": 0, 00:24:27.131 "rw_mbytes_per_sec": 0, 00:24:27.131 "r_mbytes_per_sec": 0, 00:24:27.131 "w_mbytes_per_sec": 0 00:24:27.131 }, 00:24:27.131 "claimed": true, 00:24:27.131 "claim_type": "exclusive_write", 00:24:27.131 "zoned": false, 00:24:27.131 "supported_io_types": { 00:24:27.131 "read": true, 00:24:27.131 "write": true, 00:24:27.131 "unmap": true, 00:24:27.131 "flush": true, 00:24:27.131 "reset": true, 00:24:27.131 "nvme_admin": false, 00:24:27.131 "nvme_io": false, 00:24:27.131 "nvme_io_md": false, 00:24:27.131 "write_zeroes": true, 00:24:27.131 "zcopy": true, 00:24:27.131 "get_zone_info": false, 00:24:27.131 "zone_management": false, 00:24:27.131 "zone_append": false, 00:24:27.131 "compare": false, 00:24:27.131 "compare_and_write": false, 00:24:27.131 "abort": true, 00:24:27.131 "seek_hole": false, 00:24:27.131 "seek_data": false, 00:24:27.131 "copy": true, 00:24:27.131 "nvme_iov_md": false 00:24:27.131 }, 00:24:27.131 "memory_domains": [ 00:24:27.131 { 00:24:27.131 "dma_device_id": "system", 00:24:27.131 "dma_device_type": 1 00:24:27.131 }, 00:24:27.132 { 00:24:27.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.132 "dma_device_type": 2 00:24:27.132 } 00:24:27.132 ], 00:24:27.132 "driver_specific": {} 00:24:27.132 } 00:24:27.132 ] 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.132 "name": "Existed_Raid", 00:24:27.132 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:27.132 "strip_size_kb": 64, 00:24:27.132 "state": "configuring", 00:24:27.132 "raid_level": "raid0", 00:24:27.132 "superblock": true, 00:24:27.132 "num_base_bdevs": 4, 00:24:27.132 "num_base_bdevs_discovered": 2, 00:24:27.132 
"num_base_bdevs_operational": 4, 00:24:27.132 "base_bdevs_list": [ 00:24:27.132 { 00:24:27.132 "name": "BaseBdev1", 00:24:27.132 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:27.132 "is_configured": true, 00:24:27.132 "data_offset": 2048, 00:24:27.132 "data_size": 63488 00:24:27.132 }, 00:24:27.132 { 00:24:27.132 "name": "BaseBdev2", 00:24:27.132 "uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 00:24:27.132 "is_configured": true, 00:24:27.132 "data_offset": 2048, 00:24:27.132 "data_size": 63488 00:24:27.132 }, 00:24:27.132 { 00:24:27.132 "name": "BaseBdev3", 00:24:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.132 "is_configured": false, 00:24:27.132 "data_offset": 0, 00:24:27.132 "data_size": 0 00:24:27.132 }, 00:24:27.132 { 00:24:27.132 "name": "BaseBdev4", 00:24:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.132 "is_configured": false, 00:24:27.132 "data_offset": 0, 00:24:27.132 "data_size": 0 00:24:27.132 } 00:24:27.132 ] 00:24:27.132 }' 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.132 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 [2024-11-20 07:21:51.864061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:27.698 BaseBdev3 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 [ 00:24:27.698 { 00:24:27.698 "name": "BaseBdev3", 00:24:27.698 "aliases": [ 00:24:27.698 "6a8e1071-f656-4dc7-b065-cc903ddda94c" 00:24:27.698 ], 00:24:27.698 "product_name": "Malloc disk", 00:24:27.698 "block_size": 512, 00:24:27.698 "num_blocks": 65536, 00:24:27.698 "uuid": "6a8e1071-f656-4dc7-b065-cc903ddda94c", 00:24:27.698 "assigned_rate_limits": { 00:24:27.698 "rw_ios_per_sec": 0, 00:24:27.698 "rw_mbytes_per_sec": 0, 00:24:27.698 "r_mbytes_per_sec": 0, 00:24:27.698 "w_mbytes_per_sec": 0 00:24:27.698 }, 00:24:27.698 "claimed": true, 00:24:27.698 "claim_type": "exclusive_write", 00:24:27.698 "zoned": false, 00:24:27.698 "supported_io_types": { 
00:24:27.698 "read": true, 00:24:27.698 "write": true, 00:24:27.698 "unmap": true, 00:24:27.698 "flush": true, 00:24:27.698 "reset": true, 00:24:27.698 "nvme_admin": false, 00:24:27.698 "nvme_io": false, 00:24:27.698 "nvme_io_md": false, 00:24:27.698 "write_zeroes": true, 00:24:27.698 "zcopy": true, 00:24:27.698 "get_zone_info": false, 00:24:27.698 "zone_management": false, 00:24:27.698 "zone_append": false, 00:24:27.698 "compare": false, 00:24:27.698 "compare_and_write": false, 00:24:27.698 "abort": true, 00:24:27.698 "seek_hole": false, 00:24:27.698 "seek_data": false, 00:24:27.698 "copy": true, 00:24:27.698 "nvme_iov_md": false 00:24:27.698 }, 00:24:27.698 "memory_domains": [ 00:24:27.698 { 00:24:27.698 "dma_device_id": "system", 00:24:27.698 "dma_device_type": 1 00:24:27.698 }, 00:24:27.698 { 00:24:27.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.698 "dma_device_type": 2 00:24:27.698 } 00:24:27.698 ], 00:24:27.698 "driver_specific": {} 00:24:27.698 } 00:24:27.698 ] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.698 "name": "Existed_Raid", 00:24:27.698 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:27.698 "strip_size_kb": 64, 00:24:27.698 "state": "configuring", 00:24:27.698 "raid_level": "raid0", 00:24:27.698 "superblock": true, 00:24:27.698 "num_base_bdevs": 4, 00:24:27.698 "num_base_bdevs_discovered": 3, 00:24:27.698 "num_base_bdevs_operational": 4, 00:24:27.698 "base_bdevs_list": [ 00:24:27.698 { 00:24:27.698 "name": "BaseBdev1", 00:24:27.698 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:27.698 "is_configured": true, 00:24:27.698 "data_offset": 2048, 00:24:27.698 "data_size": 63488 00:24:27.698 }, 00:24:27.698 { 00:24:27.698 "name": "BaseBdev2", 00:24:27.698 
"uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 00:24:27.698 "is_configured": true, 00:24:27.698 "data_offset": 2048, 00:24:27.698 "data_size": 63488 00:24:27.698 }, 00:24:27.698 { 00:24:27.698 "name": "BaseBdev3", 00:24:27.698 "uuid": "6a8e1071-f656-4dc7-b065-cc903ddda94c", 00:24:27.698 "is_configured": true, 00:24:27.698 "data_offset": 2048, 00:24:27.698 "data_size": 63488 00:24:27.698 }, 00:24:27.698 { 00:24:27.698 "name": "BaseBdev4", 00:24:27.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.698 "is_configured": false, 00:24:27.698 "data_offset": 0, 00:24:27.698 "data_size": 0 00:24:27.698 } 00:24:27.698 ] 00:24:27.698 }' 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.698 07:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.265 [2024-11-20 07:21:52.426793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:28.265 [2024-11-20 07:21:52.427433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:28.265 [2024-11-20 07:21:52.427460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:28.265 [2024-11-20 07:21:52.427869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:28.265 [2024-11-20 07:21:52.428106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:28.265 [2024-11-20 07:21:52.428131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:24:28.265 BaseBdev4 00:24:28.265 [2024-11-20 07:21:52.428311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.265 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.265 [ 00:24:28.265 { 00:24:28.265 "name": "BaseBdev4", 00:24:28.265 "aliases": [ 00:24:28.265 "bde78173-e5c4-4b98-be28-2adf8fa67bd9" 00:24:28.265 ], 00:24:28.265 "product_name": "Malloc disk", 00:24:28.265 "block_size": 512, 
00:24:28.265 "num_blocks": 65536, 00:24:28.265 "uuid": "bde78173-e5c4-4b98-be28-2adf8fa67bd9", 00:24:28.265 "assigned_rate_limits": { 00:24:28.265 "rw_ios_per_sec": 0, 00:24:28.265 "rw_mbytes_per_sec": 0, 00:24:28.265 "r_mbytes_per_sec": 0, 00:24:28.265 "w_mbytes_per_sec": 0 00:24:28.265 }, 00:24:28.265 "claimed": true, 00:24:28.265 "claim_type": "exclusive_write", 00:24:28.265 "zoned": false, 00:24:28.265 "supported_io_types": { 00:24:28.266 "read": true, 00:24:28.266 "write": true, 00:24:28.266 "unmap": true, 00:24:28.266 "flush": true, 00:24:28.266 "reset": true, 00:24:28.266 "nvme_admin": false, 00:24:28.266 "nvme_io": false, 00:24:28.266 "nvme_io_md": false, 00:24:28.266 "write_zeroes": true, 00:24:28.266 "zcopy": true, 00:24:28.266 "get_zone_info": false, 00:24:28.266 "zone_management": false, 00:24:28.266 "zone_append": false, 00:24:28.266 "compare": false, 00:24:28.266 "compare_and_write": false, 00:24:28.266 "abort": true, 00:24:28.266 "seek_hole": false, 00:24:28.266 "seek_data": false, 00:24:28.266 "copy": true, 00:24:28.266 "nvme_iov_md": false 00:24:28.266 }, 00:24:28.266 "memory_domains": [ 00:24:28.266 { 00:24:28.266 "dma_device_id": "system", 00:24:28.266 "dma_device_type": 1 00:24:28.266 }, 00:24:28.266 { 00:24:28.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.266 "dma_device_type": 2 00:24:28.266 } 00:24:28.266 ], 00:24:28.266 "driver_specific": {} 00:24:28.266 } 00:24:28.266 ] 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 
64 4 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.266 "name": "Existed_Raid", 00:24:28.266 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:28.266 "strip_size_kb": 64, 00:24:28.266 "state": "online", 00:24:28.266 "raid_level": "raid0", 00:24:28.266 "superblock": true, 00:24:28.266 "num_base_bdevs": 4, 
00:24:28.266 "num_base_bdevs_discovered": 4, 00:24:28.266 "num_base_bdevs_operational": 4, 00:24:28.266 "base_bdevs_list": [ 00:24:28.266 { 00:24:28.266 "name": "BaseBdev1", 00:24:28.266 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:28.266 "is_configured": true, 00:24:28.266 "data_offset": 2048, 00:24:28.266 "data_size": 63488 00:24:28.266 }, 00:24:28.266 { 00:24:28.266 "name": "BaseBdev2", 00:24:28.266 "uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 00:24:28.266 "is_configured": true, 00:24:28.266 "data_offset": 2048, 00:24:28.266 "data_size": 63488 00:24:28.266 }, 00:24:28.266 { 00:24:28.266 "name": "BaseBdev3", 00:24:28.266 "uuid": "6a8e1071-f656-4dc7-b065-cc903ddda94c", 00:24:28.266 "is_configured": true, 00:24:28.266 "data_offset": 2048, 00:24:28.266 "data_size": 63488 00:24:28.266 }, 00:24:28.266 { 00:24:28.266 "name": "BaseBdev4", 00:24:28.266 "uuid": "bde78173-e5c4-4b98-be28-2adf8fa67bd9", 00:24:28.266 "is_configured": true, 00:24:28.266 "data_offset": 2048, 00:24:28.266 "data_size": 63488 00:24:28.266 } 00:24:28.266 ] 00:24:28.266 }' 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.266 07:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:28.831 
07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:28.831 [2024-11-20 07:21:53.019467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.831 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.831 "name": "Existed_Raid", 00:24:28.831 "aliases": [ 00:24:28.831 "272fd8d8-d003-475c-8bfc-e1b35b3310a2" 00:24:28.831 ], 00:24:28.831 "product_name": "Raid Volume", 00:24:28.831 "block_size": 512, 00:24:28.831 "num_blocks": 253952, 00:24:28.831 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:28.831 "assigned_rate_limits": { 00:24:28.831 "rw_ios_per_sec": 0, 00:24:28.831 "rw_mbytes_per_sec": 0, 00:24:28.831 "r_mbytes_per_sec": 0, 00:24:28.831 "w_mbytes_per_sec": 0 00:24:28.831 }, 00:24:28.831 "claimed": false, 00:24:28.831 "zoned": false, 00:24:28.831 "supported_io_types": { 00:24:28.831 "read": true, 00:24:28.831 "write": true, 00:24:28.831 "unmap": true, 00:24:28.831 "flush": true, 00:24:28.831 "reset": true, 00:24:28.831 "nvme_admin": false, 00:24:28.831 "nvme_io": false, 00:24:28.831 "nvme_io_md": false, 00:24:28.831 "write_zeroes": true, 00:24:28.831 "zcopy": false, 00:24:28.831 "get_zone_info": false, 00:24:28.832 "zone_management": false, 00:24:28.832 "zone_append": false, 00:24:28.832 "compare": false, 00:24:28.832 "compare_and_write": false, 00:24:28.832 "abort": false, 00:24:28.832 "seek_hole": false, 00:24:28.832 "seek_data": false, 00:24:28.832 "copy": false, 00:24:28.832 
"nvme_iov_md": false 00:24:28.832 }, 00:24:28.832 "memory_domains": [ 00:24:28.832 { 00:24:28.832 "dma_device_id": "system", 00:24:28.832 "dma_device_type": 1 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.832 "dma_device_type": 2 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "system", 00:24:28.832 "dma_device_type": 1 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.832 "dma_device_type": 2 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "system", 00:24:28.832 "dma_device_type": 1 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.832 "dma_device_type": 2 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "system", 00:24:28.832 "dma_device_type": 1 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.832 "dma_device_type": 2 00:24:28.832 } 00:24:28.832 ], 00:24:28.832 "driver_specific": { 00:24:28.832 "raid": { 00:24:28.832 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:28.832 "strip_size_kb": 64, 00:24:28.832 "state": "online", 00:24:28.832 "raid_level": "raid0", 00:24:28.832 "superblock": true, 00:24:28.832 "num_base_bdevs": 4, 00:24:28.832 "num_base_bdevs_discovered": 4, 00:24:28.832 "num_base_bdevs_operational": 4, 00:24:28.832 "base_bdevs_list": [ 00:24:28.832 { 00:24:28.832 "name": "BaseBdev1", 00:24:28.832 "uuid": "f64047f6-c9e8-4747-8a25-7d6788cf21b0", 00:24:28.832 "is_configured": true, 00:24:28.832 "data_offset": 2048, 00:24:28.832 "data_size": 63488 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "name": "BaseBdev2", 00:24:28.832 "uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 00:24:28.832 "is_configured": true, 00:24:28.832 "data_offset": 2048, 00:24:28.832 "data_size": 63488 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "name": "BaseBdev3", 00:24:28.832 "uuid": "6a8e1071-f656-4dc7-b065-cc903ddda94c", 00:24:28.832 "is_configured": true, 
00:24:28.832 "data_offset": 2048, 00:24:28.832 "data_size": 63488 00:24:28.832 }, 00:24:28.832 { 00:24:28.832 "name": "BaseBdev4", 00:24:28.832 "uuid": "bde78173-e5c4-4b98-be28-2adf8fa67bd9", 00:24:28.832 "is_configured": true, 00:24:28.832 "data_offset": 2048, 00:24:28.832 "data_size": 63488 00:24:28.832 } 00:24:28.832 ] 00:24:28.832 } 00:24:28.832 } 00:24:28.832 }' 00:24:28.832 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:28.832 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:28.832 BaseBdev2 00:24:28.832 BaseBdev3 00:24:28.832 BaseBdev4' 00:24:28.832 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.090 07:21:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.090 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.348 [2024-11-20 07:21:53.403283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.348 [2024-11-20 07:21:53.403327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.348 [2024-11-20 07:21:53.403403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.348 "name": "Existed_Raid", 00:24:29.348 "uuid": "272fd8d8-d003-475c-8bfc-e1b35b3310a2", 00:24:29.348 "strip_size_kb": 64, 00:24:29.348 "state": "offline", 00:24:29.348 "raid_level": "raid0", 00:24:29.348 "superblock": true, 00:24:29.348 "num_base_bdevs": 4, 00:24:29.348 "num_base_bdevs_discovered": 3, 00:24:29.348 "num_base_bdevs_operational": 3, 00:24:29.348 "base_bdevs_list": [ 00:24:29.348 { 00:24:29.348 "name": null, 00:24:29.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.348 "is_configured": false, 00:24:29.348 "data_offset": 0, 00:24:29.348 "data_size": 63488 00:24:29.348 }, 00:24:29.348 { 00:24:29.348 "name": "BaseBdev2", 00:24:29.348 "uuid": "b0026d5f-aaca-4f40-90bb-3b20d3cb6abb", 00:24:29.348 "is_configured": true, 00:24:29.348 "data_offset": 2048, 00:24:29.348 "data_size": 63488 00:24:29.348 }, 00:24:29.348 { 00:24:29.348 "name": "BaseBdev3", 00:24:29.348 "uuid": "6a8e1071-f656-4dc7-b065-cc903ddda94c", 00:24:29.348 "is_configured": true, 00:24:29.348 "data_offset": 2048, 00:24:29.348 "data_size": 63488 00:24:29.348 }, 00:24:29.348 { 00:24:29.348 "name": "BaseBdev4", 00:24:29.348 "uuid": "bde78173-e5c4-4b98-be28-2adf8fa67bd9", 00:24:29.348 "is_configured": true, 00:24:29.348 "data_offset": 2048, 00:24:29.348 "data_size": 63488 00:24:29.348 } 00:24:29.348 ] 00:24:29.348 }' 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.348 07:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.914 
07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.914 [2024-11-20 07:21:54.070092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:29.914 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:30.171 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:30.171 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:30.171 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:30.171 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.171 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.172 [2024-11-20 07:21:54.213350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:24:30.172 07:21:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.172 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.172 [2024-11-20 07:21:54.376784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:30.172 [2024-11-20 07:21:54.376849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 BaseBdev2 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 [ 00:24:30.430 { 00:24:30.430 "name": "BaseBdev2", 00:24:30.430 "aliases": [ 00:24:30.430 
"b96b9a2e-18f4-4da9-85b7-64d91bc5a450" 00:24:30.430 ], 00:24:30.430 "product_name": "Malloc disk", 00:24:30.430 "block_size": 512, 00:24:30.430 "num_blocks": 65536, 00:24:30.430 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:30.430 "assigned_rate_limits": { 00:24:30.430 "rw_ios_per_sec": 0, 00:24:30.430 "rw_mbytes_per_sec": 0, 00:24:30.430 "r_mbytes_per_sec": 0, 00:24:30.430 "w_mbytes_per_sec": 0 00:24:30.430 }, 00:24:30.430 "claimed": false, 00:24:30.430 "zoned": false, 00:24:30.430 "supported_io_types": { 00:24:30.430 "read": true, 00:24:30.430 "write": true, 00:24:30.430 "unmap": true, 00:24:30.430 "flush": true, 00:24:30.430 "reset": true, 00:24:30.430 "nvme_admin": false, 00:24:30.430 "nvme_io": false, 00:24:30.430 "nvme_io_md": false, 00:24:30.430 "write_zeroes": true, 00:24:30.430 "zcopy": true, 00:24:30.430 "get_zone_info": false, 00:24:30.430 "zone_management": false, 00:24:30.430 "zone_append": false, 00:24:30.430 "compare": false, 00:24:30.430 "compare_and_write": false, 00:24:30.430 "abort": true, 00:24:30.430 "seek_hole": false, 00:24:30.430 "seek_data": false, 00:24:30.430 "copy": true, 00:24:30.430 "nvme_iov_md": false 00:24:30.430 }, 00:24:30.430 "memory_domains": [ 00:24:30.430 { 00:24:30.430 "dma_device_id": "system", 00:24:30.430 "dma_device_type": 1 00:24:30.430 }, 00:24:30.430 { 00:24:30.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.430 "dma_device_type": 2 00:24:30.430 } 00:24:30.430 ], 00:24:30.430 "driver_specific": {} 00:24:30.430 } 00:24:30.430 ] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:30.430 07:21:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 BaseBdev3 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.430 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.430 [ 00:24:30.430 { 
00:24:30.430 "name": "BaseBdev3", 00:24:30.430 "aliases": [ 00:24:30.430 "759b3420-6cc0-4583-a212-bbbfc1592ecc" 00:24:30.430 ], 00:24:30.430 "product_name": "Malloc disk", 00:24:30.430 "block_size": 512, 00:24:30.430 "num_blocks": 65536, 00:24:30.430 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:30.430 "assigned_rate_limits": { 00:24:30.430 "rw_ios_per_sec": 0, 00:24:30.430 "rw_mbytes_per_sec": 0, 00:24:30.430 "r_mbytes_per_sec": 0, 00:24:30.430 "w_mbytes_per_sec": 0 00:24:30.430 }, 00:24:30.430 "claimed": false, 00:24:30.430 "zoned": false, 00:24:30.430 "supported_io_types": { 00:24:30.430 "read": true, 00:24:30.430 "write": true, 00:24:30.430 "unmap": true, 00:24:30.430 "flush": true, 00:24:30.430 "reset": true, 00:24:30.430 "nvme_admin": false, 00:24:30.430 "nvme_io": false, 00:24:30.430 "nvme_io_md": false, 00:24:30.430 "write_zeroes": true, 00:24:30.430 "zcopy": true, 00:24:30.430 "get_zone_info": false, 00:24:30.430 "zone_management": false, 00:24:30.430 "zone_append": false, 00:24:30.430 "compare": false, 00:24:30.430 "compare_and_write": false, 00:24:30.430 "abort": true, 00:24:30.430 "seek_hole": false, 00:24:30.430 "seek_data": false, 00:24:30.430 "copy": true, 00:24:30.430 "nvme_iov_md": false 00:24:30.430 }, 00:24:30.430 "memory_domains": [ 00:24:30.430 { 00:24:30.431 "dma_device_id": "system", 00:24:30.431 "dma_device_type": 1 00:24:30.431 }, 00:24:30.431 { 00:24:30.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.431 "dma_device_type": 2 00:24:30.431 } 00:24:30.431 ], 00:24:30.431 "driver_specific": {} 00:24:30.431 } 00:24:30.431 ] 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.431 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.689 BaseBdev4 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:24:30.689 [ 00:24:30.689 { 00:24:30.689 "name": "BaseBdev4", 00:24:30.689 "aliases": [ 00:24:30.689 "927684db-fb04-4007-910c-37c07a33cdb8" 00:24:30.689 ], 00:24:30.689 "product_name": "Malloc disk", 00:24:30.689 "block_size": 512, 00:24:30.689 "num_blocks": 65536, 00:24:30.689 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:30.689 "assigned_rate_limits": { 00:24:30.689 "rw_ios_per_sec": 0, 00:24:30.689 "rw_mbytes_per_sec": 0, 00:24:30.689 "r_mbytes_per_sec": 0, 00:24:30.689 "w_mbytes_per_sec": 0 00:24:30.689 }, 00:24:30.689 "claimed": false, 00:24:30.689 "zoned": false, 00:24:30.689 "supported_io_types": { 00:24:30.689 "read": true, 00:24:30.689 "write": true, 00:24:30.689 "unmap": true, 00:24:30.689 "flush": true, 00:24:30.689 "reset": true, 00:24:30.689 "nvme_admin": false, 00:24:30.689 "nvme_io": false, 00:24:30.689 "nvme_io_md": false, 00:24:30.689 "write_zeroes": true, 00:24:30.689 "zcopy": true, 00:24:30.689 "get_zone_info": false, 00:24:30.689 "zone_management": false, 00:24:30.689 "zone_append": false, 00:24:30.689 "compare": false, 00:24:30.689 "compare_and_write": false, 00:24:30.689 "abort": true, 00:24:30.689 "seek_hole": false, 00:24:30.689 "seek_data": false, 00:24:30.689 "copy": true, 00:24:30.689 "nvme_iov_md": false 00:24:30.689 }, 00:24:30.689 "memory_domains": [ 00:24:30.689 { 00:24:30.689 "dma_device_id": "system", 00:24:30.689 "dma_device_type": 1 00:24:30.689 }, 00:24:30.689 { 00:24:30.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.689 "dma_device_type": 2 00:24:30.689 } 00:24:30.689 ], 00:24:30.689 "driver_specific": {} 00:24:30.689 } 00:24:30.689 ] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:30.689 07:21:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.689 [2024-11-20 07:21:54.765957] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:30.689 [2024-11-20 07:21:54.766017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:30.689 [2024-11-20 07:21:54.766057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.689 [2024-11-20 07:21:54.768618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:30.689 [2024-11-20 07:21:54.768694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.689 "name": "Existed_Raid", 00:24:30.689 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:30.689 "strip_size_kb": 64, 00:24:30.689 "state": "configuring", 00:24:30.689 "raid_level": "raid0", 00:24:30.689 "superblock": true, 00:24:30.689 "num_base_bdevs": 4, 00:24:30.689 "num_base_bdevs_discovered": 3, 00:24:30.689 "num_base_bdevs_operational": 4, 00:24:30.689 "base_bdevs_list": [ 00:24:30.689 { 00:24:30.689 "name": "BaseBdev1", 00:24:30.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.689 "is_configured": false, 00:24:30.689 "data_offset": 0, 00:24:30.689 "data_size": 0 00:24:30.689 }, 00:24:30.689 { 00:24:30.689 "name": "BaseBdev2", 00:24:30.689 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:30.689 "is_configured": true, 00:24:30.689 "data_offset": 2048, 00:24:30.689 "data_size": 63488 
00:24:30.689 }, 00:24:30.689 { 00:24:30.689 "name": "BaseBdev3", 00:24:30.689 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:30.689 "is_configured": true, 00:24:30.689 "data_offset": 2048, 00:24:30.689 "data_size": 63488 00:24:30.689 }, 00:24:30.689 { 00:24:30.689 "name": "BaseBdev4", 00:24:30.689 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:30.689 "is_configured": true, 00:24:30.689 "data_offset": 2048, 00:24:30.689 "data_size": 63488 00:24:30.689 } 00:24:30.689 ] 00:24:30.689 }' 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.689 07:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.256 [2024-11-20 07:21:55.342095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.256 "name": "Existed_Raid", 00:24:31.256 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:31.256 "strip_size_kb": 64, 00:24:31.256 "state": "configuring", 00:24:31.256 "raid_level": "raid0", 00:24:31.256 "superblock": true, 00:24:31.256 "num_base_bdevs": 4, 00:24:31.256 "num_base_bdevs_discovered": 2, 00:24:31.256 "num_base_bdevs_operational": 4, 00:24:31.256 "base_bdevs_list": [ 00:24:31.256 { 00:24:31.256 "name": "BaseBdev1", 00:24:31.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.256 "is_configured": false, 00:24:31.256 "data_offset": 0, 00:24:31.256 "data_size": 0 00:24:31.256 }, 00:24:31.256 { 00:24:31.256 "name": null, 00:24:31.256 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:31.256 "is_configured": false, 00:24:31.256 "data_offset": 0, 00:24:31.256 "data_size": 63488 
00:24:31.256 }, 00:24:31.256 { 00:24:31.256 "name": "BaseBdev3", 00:24:31.256 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:31.256 "is_configured": true, 00:24:31.256 "data_offset": 2048, 00:24:31.256 "data_size": 63488 00:24:31.256 }, 00:24:31.256 { 00:24:31.256 "name": "BaseBdev4", 00:24:31.256 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:31.256 "is_configured": true, 00:24:31.256 "data_offset": 2048, 00:24:31.256 "data_size": 63488 00:24:31.256 } 00:24:31.256 ] 00:24:31.256 }' 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.256 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 [2024-11-20 07:21:55.960248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.822 BaseBdev1 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.822 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.822 [ 00:24:31.822 { 00:24:31.822 "name": "BaseBdev1", 00:24:31.822 "aliases": [ 00:24:31.822 "abe4b433-1759-434c-9b29-f99830a59e1e" 00:24:31.822 ], 00:24:31.822 "product_name": "Malloc disk", 00:24:31.822 "block_size": 512, 00:24:31.822 "num_blocks": 65536, 00:24:31.822 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:31.822 "assigned_rate_limits": { 00:24:31.822 "rw_ios_per_sec": 0, 00:24:31.822 "rw_mbytes_per_sec": 0, 
00:24:31.822 "r_mbytes_per_sec": 0, 00:24:31.822 "w_mbytes_per_sec": 0 00:24:31.822 }, 00:24:31.822 "claimed": true, 00:24:31.822 "claim_type": "exclusive_write", 00:24:31.822 "zoned": false, 00:24:31.822 "supported_io_types": { 00:24:31.822 "read": true, 00:24:31.822 "write": true, 00:24:31.822 "unmap": true, 00:24:31.822 "flush": true, 00:24:31.822 "reset": true, 00:24:31.822 "nvme_admin": false, 00:24:31.822 "nvme_io": false, 00:24:31.822 "nvme_io_md": false, 00:24:31.822 "write_zeroes": true, 00:24:31.822 "zcopy": true, 00:24:31.823 "get_zone_info": false, 00:24:31.823 "zone_management": false, 00:24:31.823 "zone_append": false, 00:24:31.823 "compare": false, 00:24:31.823 "compare_and_write": false, 00:24:31.823 "abort": true, 00:24:31.823 "seek_hole": false, 00:24:31.823 "seek_data": false, 00:24:31.823 "copy": true, 00:24:31.823 "nvme_iov_md": false 00:24:31.823 }, 00:24:31.823 "memory_domains": [ 00:24:31.823 { 00:24:31.823 "dma_device_id": "system", 00:24:31.823 "dma_device_type": 1 00:24:31.823 }, 00:24:31.823 { 00:24:31.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.823 "dma_device_type": 2 00:24:31.823 } 00:24:31.823 ], 00:24:31.823 "driver_specific": {} 00:24:31.823 } 00:24:31.823 ] 00:24:31.823 07:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:31.823 07:21:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.823 "name": "Existed_Raid", 00:24:31.823 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:31.823 "strip_size_kb": 64, 00:24:31.823 "state": "configuring", 00:24:31.823 "raid_level": "raid0", 00:24:31.823 "superblock": true, 00:24:31.823 "num_base_bdevs": 4, 00:24:31.823 "num_base_bdevs_discovered": 3, 00:24:31.823 "num_base_bdevs_operational": 4, 00:24:31.823 "base_bdevs_list": [ 00:24:31.823 { 00:24:31.823 "name": "BaseBdev1", 00:24:31.823 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:31.823 "is_configured": true, 00:24:31.823 "data_offset": 2048, 00:24:31.823 "data_size": 63488 00:24:31.823 }, 00:24:31.823 { 
00:24:31.823 "name": null, 00:24:31.823 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:31.823 "is_configured": false, 00:24:31.823 "data_offset": 0, 00:24:31.823 "data_size": 63488 00:24:31.823 }, 00:24:31.823 { 00:24:31.823 "name": "BaseBdev3", 00:24:31.823 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:31.823 "is_configured": true, 00:24:31.823 "data_offset": 2048, 00:24:31.823 "data_size": 63488 00:24:31.823 }, 00:24:31.823 { 00:24:31.823 "name": "BaseBdev4", 00:24:31.823 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:31.823 "is_configured": true, 00:24:31.823 "data_offset": 2048, 00:24:31.823 "data_size": 63488 00:24:31.823 } 00:24:31.823 ] 00:24:31.823 }' 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.823 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.390 [2024-11-20 07:21:56.496504] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.390 07:21:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.390 "name": "Existed_Raid", 00:24:32.390 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:32.390 "strip_size_kb": 64, 00:24:32.390 "state": "configuring", 00:24:32.390 "raid_level": "raid0", 00:24:32.390 "superblock": true, 00:24:32.390 "num_base_bdevs": 4, 00:24:32.390 "num_base_bdevs_discovered": 2, 00:24:32.390 "num_base_bdevs_operational": 4, 00:24:32.390 "base_bdevs_list": [ 00:24:32.390 { 00:24:32.390 "name": "BaseBdev1", 00:24:32.390 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:32.390 "is_configured": true, 00:24:32.390 "data_offset": 2048, 00:24:32.390 "data_size": 63488 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "name": null, 00:24:32.390 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:32.390 "is_configured": false, 00:24:32.390 "data_offset": 0, 00:24:32.390 "data_size": 63488 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "name": null, 00:24:32.390 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:32.390 "is_configured": false, 00:24:32.390 "data_offset": 0, 00:24:32.390 "data_size": 63488 00:24:32.390 }, 00:24:32.390 { 00:24:32.390 "name": "BaseBdev4", 00:24:32.390 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:32.390 "is_configured": true, 00:24:32.390 "data_offset": 2048, 00:24:32.390 "data_size": 63488 00:24:32.390 } 00:24:32.390 ] 00:24:32.390 }' 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.390 07:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 07:21:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 [2024-11-20 07:21:57.084679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.961 "name": "Existed_Raid", 00:24:32.961 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:32.961 "strip_size_kb": 64, 00:24:32.961 "state": "configuring", 00:24:32.961 "raid_level": "raid0", 00:24:32.961 "superblock": true, 00:24:32.961 "num_base_bdevs": 4, 00:24:32.961 "num_base_bdevs_discovered": 3, 00:24:32.961 "num_base_bdevs_operational": 4, 00:24:32.961 "base_bdevs_list": [ 00:24:32.961 { 00:24:32.961 "name": "BaseBdev1", 00:24:32.961 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:32.961 "is_configured": true, 00:24:32.961 "data_offset": 2048, 00:24:32.961 "data_size": 63488 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "name": null, 00:24:32.961 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:32.961 "is_configured": false, 00:24:32.961 "data_offset": 0, 00:24:32.961 "data_size": 63488 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "name": "BaseBdev3", 00:24:32.961 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:32.961 "is_configured": true, 00:24:32.961 "data_offset": 2048, 00:24:32.961 "data_size": 63488 00:24:32.961 }, 00:24:32.961 { 00:24:32.961 "name": "BaseBdev4", 00:24:32.961 "uuid": 
"927684db-fb04-4007-910c-37c07a33cdb8", 00:24:32.961 "is_configured": true, 00:24:32.961 "data_offset": 2048, 00:24:32.961 "data_size": 63488 00:24:32.961 } 00:24:32.961 ] 00:24:32.961 }' 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.961 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.537 [2024-11-20 07:21:57.620835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.537 "name": "Existed_Raid", 00:24:33.537 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:33.537 "strip_size_kb": 64, 00:24:33.537 "state": "configuring", 00:24:33.537 "raid_level": "raid0", 00:24:33.537 "superblock": true, 00:24:33.537 "num_base_bdevs": 4, 00:24:33.537 "num_base_bdevs_discovered": 2, 00:24:33.537 "num_base_bdevs_operational": 4, 00:24:33.537 "base_bdevs_list": [ 00:24:33.537 { 00:24:33.537 "name": null, 00:24:33.537 
"uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:33.537 "is_configured": false, 00:24:33.537 "data_offset": 0, 00:24:33.537 "data_size": 63488 00:24:33.537 }, 00:24:33.537 { 00:24:33.537 "name": null, 00:24:33.537 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:33.537 "is_configured": false, 00:24:33.537 "data_offset": 0, 00:24:33.537 "data_size": 63488 00:24:33.537 }, 00:24:33.537 { 00:24:33.537 "name": "BaseBdev3", 00:24:33.537 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:33.537 "is_configured": true, 00:24:33.537 "data_offset": 2048, 00:24:33.537 "data_size": 63488 00:24:33.537 }, 00:24:33.537 { 00:24:33.537 "name": "BaseBdev4", 00:24:33.537 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:33.537 "is_configured": true, 00:24:33.537 "data_offset": 2048, 00:24:33.537 "data_size": 63488 00:24:33.537 } 00:24:33.537 ] 00:24:33.537 }' 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.537 07:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.105 [2024-11-20 07:21:58.283838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.105 07:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.105 "name": "Existed_Raid", 00:24:34.105 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:34.105 "strip_size_kb": 64, 00:24:34.105 "state": "configuring", 00:24:34.105 "raid_level": "raid0", 00:24:34.105 "superblock": true, 00:24:34.105 "num_base_bdevs": 4, 00:24:34.105 "num_base_bdevs_discovered": 3, 00:24:34.105 "num_base_bdevs_operational": 4, 00:24:34.105 "base_bdevs_list": [ 00:24:34.105 { 00:24:34.105 "name": null, 00:24:34.105 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:34.105 "is_configured": false, 00:24:34.105 "data_offset": 0, 00:24:34.105 "data_size": 63488 00:24:34.105 }, 00:24:34.105 { 00:24:34.105 "name": "BaseBdev2", 00:24:34.105 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:34.105 "is_configured": true, 00:24:34.105 "data_offset": 2048, 00:24:34.105 "data_size": 63488 00:24:34.105 }, 00:24:34.105 { 00:24:34.105 "name": "BaseBdev3", 00:24:34.105 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:34.105 "is_configured": true, 00:24:34.105 "data_offset": 2048, 00:24:34.105 "data_size": 63488 00:24:34.105 }, 00:24:34.105 { 00:24:34.105 "name": "BaseBdev4", 00:24:34.105 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:34.105 "is_configured": true, 00:24:34.105 "data_offset": 2048, 00:24:34.105 "data_size": 63488 00:24:34.105 } 00:24:34.105 ] 00:24:34.105 }' 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.105 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.672 07:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abe4b433-1759-434c-9b29-f99830a59e1e 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.672 [2024-11-20 07:21:58.942016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:34.672 [2024-11-20 07:21:58.942319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:34.672 [2024-11-20 07:21:58.942337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:34.672 NewBaseBdev 00:24:34.672 [2024-11-20 07:21:58.942692] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:34.672 [2024-11-20 07:21:58.942901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:34.672 [2024-11-20 07:21:58.942925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:34.672 [2024-11-20 07:21:58.943080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:34.672 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.672 
07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.931 [ 00:24:34.931 { 00:24:34.931 "name": "NewBaseBdev", 00:24:34.931 "aliases": [ 00:24:34.931 "abe4b433-1759-434c-9b29-f99830a59e1e" 00:24:34.931 ], 00:24:34.931 "product_name": "Malloc disk", 00:24:34.931 "block_size": 512, 00:24:34.931 "num_blocks": 65536, 00:24:34.931 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:34.931 "assigned_rate_limits": { 00:24:34.931 "rw_ios_per_sec": 0, 00:24:34.931 "rw_mbytes_per_sec": 0, 00:24:34.931 "r_mbytes_per_sec": 0, 00:24:34.931 "w_mbytes_per_sec": 0 00:24:34.931 }, 00:24:34.931 "claimed": true, 00:24:34.931 "claim_type": "exclusive_write", 00:24:34.931 "zoned": false, 00:24:34.931 "supported_io_types": { 00:24:34.931 "read": true, 00:24:34.931 "write": true, 00:24:34.931 "unmap": true, 00:24:34.931 "flush": true, 00:24:34.931 "reset": true, 00:24:34.931 "nvme_admin": false, 00:24:34.931 "nvme_io": false, 00:24:34.931 "nvme_io_md": false, 00:24:34.931 "write_zeroes": true, 00:24:34.931 "zcopy": true, 00:24:34.931 "get_zone_info": false, 00:24:34.931 "zone_management": false, 00:24:34.931 "zone_append": false, 00:24:34.931 "compare": false, 00:24:34.931 "compare_and_write": false, 00:24:34.931 "abort": true, 00:24:34.931 "seek_hole": false, 00:24:34.931 "seek_data": false, 00:24:34.931 "copy": true, 00:24:34.931 "nvme_iov_md": false 00:24:34.931 }, 00:24:34.931 "memory_domains": [ 00:24:34.931 { 00:24:34.931 "dma_device_id": "system", 00:24:34.931 "dma_device_type": 1 00:24:34.931 }, 00:24:34.931 { 00:24:34.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.931 "dma_device_type": 2 00:24:34.931 } 00:24:34.931 ], 00:24:34.931 "driver_specific": {} 00:24:34.931 } 00:24:34.931 ] 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:34.931 07:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.931 07:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.931 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.931 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.931 "name": "Existed_Raid", 00:24:34.931 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:34.931 "strip_size_kb": 64, 00:24:34.931 
"state": "online", 00:24:34.931 "raid_level": "raid0", 00:24:34.931 "superblock": true, 00:24:34.931 "num_base_bdevs": 4, 00:24:34.931 "num_base_bdevs_discovered": 4, 00:24:34.931 "num_base_bdevs_operational": 4, 00:24:34.931 "base_bdevs_list": [ 00:24:34.931 { 00:24:34.931 "name": "NewBaseBdev", 00:24:34.931 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:34.931 "is_configured": true, 00:24:34.931 "data_offset": 2048, 00:24:34.931 "data_size": 63488 00:24:34.931 }, 00:24:34.931 { 00:24:34.931 "name": "BaseBdev2", 00:24:34.931 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:34.931 "is_configured": true, 00:24:34.931 "data_offset": 2048, 00:24:34.931 "data_size": 63488 00:24:34.931 }, 00:24:34.931 { 00:24:34.931 "name": "BaseBdev3", 00:24:34.931 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:34.931 "is_configured": true, 00:24:34.931 "data_offset": 2048, 00:24:34.931 "data_size": 63488 00:24:34.931 }, 00:24:34.931 { 00:24:34.931 "name": "BaseBdev4", 00:24:34.931 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:34.931 "is_configured": true, 00:24:34.931 "data_offset": 2048, 00:24:34.931 "data_size": 63488 00:24:34.931 } 00:24:34.931 ] 00:24:34.931 }' 00:24:34.931 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.931 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:35.498 
07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.498 [2024-11-20 07:21:59.522703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.498 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:35.498 "name": "Existed_Raid", 00:24:35.498 "aliases": [ 00:24:35.498 "0051b7a6-c925-4706-a375-30bad6a9e320" 00:24:35.498 ], 00:24:35.498 "product_name": "Raid Volume", 00:24:35.498 "block_size": 512, 00:24:35.498 "num_blocks": 253952, 00:24:35.498 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:35.498 "assigned_rate_limits": { 00:24:35.498 "rw_ios_per_sec": 0, 00:24:35.499 "rw_mbytes_per_sec": 0, 00:24:35.499 "r_mbytes_per_sec": 0, 00:24:35.499 "w_mbytes_per_sec": 0 00:24:35.499 }, 00:24:35.499 "claimed": false, 00:24:35.499 "zoned": false, 00:24:35.499 "supported_io_types": { 00:24:35.499 "read": true, 00:24:35.499 "write": true, 00:24:35.499 "unmap": true, 00:24:35.499 "flush": true, 00:24:35.499 "reset": true, 00:24:35.499 "nvme_admin": false, 00:24:35.499 "nvme_io": false, 00:24:35.499 "nvme_io_md": false, 00:24:35.499 "write_zeroes": true, 00:24:35.499 "zcopy": false, 00:24:35.499 "get_zone_info": false, 00:24:35.499 "zone_management": false, 00:24:35.499 "zone_append": false, 00:24:35.499 "compare": false, 00:24:35.499 "compare_and_write": false, 00:24:35.499 "abort": 
false, 00:24:35.499 "seek_hole": false, 00:24:35.499 "seek_data": false, 00:24:35.499 "copy": false, 00:24:35.499 "nvme_iov_md": false 00:24:35.499 }, 00:24:35.499 "memory_domains": [ 00:24:35.499 { 00:24:35.499 "dma_device_id": "system", 00:24:35.499 "dma_device_type": 1 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.499 "dma_device_type": 2 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "system", 00:24:35.499 "dma_device_type": 1 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.499 "dma_device_type": 2 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "system", 00:24:35.499 "dma_device_type": 1 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.499 "dma_device_type": 2 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "system", 00:24:35.499 "dma_device_type": 1 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.499 "dma_device_type": 2 00:24:35.499 } 00:24:35.499 ], 00:24:35.499 "driver_specific": { 00:24:35.499 "raid": { 00:24:35.499 "uuid": "0051b7a6-c925-4706-a375-30bad6a9e320", 00:24:35.499 "strip_size_kb": 64, 00:24:35.499 "state": "online", 00:24:35.499 "raid_level": "raid0", 00:24:35.499 "superblock": true, 00:24:35.499 "num_base_bdevs": 4, 00:24:35.499 "num_base_bdevs_discovered": 4, 00:24:35.499 "num_base_bdevs_operational": 4, 00:24:35.499 "base_bdevs_list": [ 00:24:35.499 { 00:24:35.499 "name": "NewBaseBdev", 00:24:35.499 "uuid": "abe4b433-1759-434c-9b29-f99830a59e1e", 00:24:35.499 "is_configured": true, 00:24:35.499 "data_offset": 2048, 00:24:35.499 "data_size": 63488 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "name": "BaseBdev2", 00:24:35.499 "uuid": "b96b9a2e-18f4-4da9-85b7-64d91bc5a450", 00:24:35.499 "is_configured": true, 00:24:35.499 "data_offset": 2048, 00:24:35.499 "data_size": 63488 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 
"name": "BaseBdev3", 00:24:35.499 "uuid": "759b3420-6cc0-4583-a212-bbbfc1592ecc", 00:24:35.499 "is_configured": true, 00:24:35.499 "data_offset": 2048, 00:24:35.499 "data_size": 63488 00:24:35.499 }, 00:24:35.499 { 00:24:35.499 "name": "BaseBdev4", 00:24:35.499 "uuid": "927684db-fb04-4007-910c-37c07a33cdb8", 00:24:35.499 "is_configured": true, 00:24:35.499 "data_offset": 2048, 00:24:35.499 "data_size": 63488 00:24:35.499 } 00:24:35.499 ] 00:24:35.499 } 00:24:35.499 } 00:24:35.499 }' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:35.499 BaseBdev2 00:24:35.499 BaseBdev3 00:24:35.499 BaseBdev4' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.499 07:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.499 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.759 [2024-11-20 07:21:59.910333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:35.759 [2024-11-20 07:21:59.910510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.759 [2024-11-20 07:21:59.910659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.759 [2024-11-20 07:21:59.910757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.759 [2024-11-20 07:21:59.910775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70353 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70353 ']' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70353 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70353 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70353' 00:24:35.759 killing process with pid 70353 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70353 00:24:35.759 [2024-11-20 07:21:59.955556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.759 07:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70353 00:24:36.327 [2024-11-20 07:22:00.316048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:37.265 07:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:37.265 00:24:37.265 real 0m12.933s 00:24:37.265 user 0m21.400s 00:24:37.265 sys 0m1.759s 00:24:37.265 07:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.265 07:22:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.265 ************************************ 00:24:37.265 END TEST raid_state_function_test_sb 00:24:37.265 ************************************ 00:24:37.265 07:22:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:37.265 07:22:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:37.265 07:22:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.265 07:22:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.265 ************************************ 00:24:37.265 START TEST raid_superblock_test 00:24:37.265 ************************************ 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71039 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71039 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71039 ']' 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.265 07:22:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.265 [2024-11-20 07:22:01.516784] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:37.265 [2024-11-20 07:22:01.517235] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:24:37.524 [2024-11-20 07:22:01.701730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.783 [2024-11-20 07:22:01.834242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.783 [2024-11-20 07:22:02.038763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.783 [2024-11-20 07:22:02.039043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:38.350 
07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.350 malloc1 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.350 [2024-11-20 07:22:02.573634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:38.350 [2024-11-20 07:22:02.573717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.350 [2024-11-20 07:22:02.573755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:38.350 [2024-11-20 07:22:02.573773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.350 [2024-11-20 07:22:02.576649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.350 [2024-11-20 07:22:02.576700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:38.350 pt1 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.350 malloc2 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.350 [2024-11-20 07:22:02.625495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:38.350 [2024-11-20 07:22:02.625568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.350 [2024-11-20 07:22:02.625624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:38.350 [2024-11-20 07:22:02.625642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.350 [2024-11-20 07:22:02.628358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.350 [2024-11-20 07:22:02.628405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:38.350 
pt2 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.350 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 malloc3 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 [2024-11-20 07:22:02.693195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:38.609 [2024-11-20 07:22:02.693279] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.609 [2024-11-20 07:22:02.693317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:38.609 [2024-11-20 07:22:02.693333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.609 [2024-11-20 07:22:02.696235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.609 [2024-11-20 07:22:02.696282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:38.609 pt3 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 malloc4 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 [2024-11-20 07:22:02.749540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:38.609 [2024-11-20 07:22:02.749626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.609 [2024-11-20 07:22:02.749661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:38.609 [2024-11-20 07:22:02.749684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.609 [2024-11-20 07:22:02.752741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.609 [2024-11-20 07:22:02.752799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:38.609 pt4 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:38.609 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.610 [2024-11-20 07:22:02.761693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:38.610 [2024-11-20 
07:22:02.764235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:38.610 [2024-11-20 07:22:02.764343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:38.610 [2024-11-20 07:22:02.764445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:38.610 [2024-11-20 07:22:02.764720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:38.610 [2024-11-20 07:22:02.764739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:38.610 [2024-11-20 07:22:02.765093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:38.610 [2024-11-20 07:22:02.765334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:38.610 [2024-11-20 07:22:02.765355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:38.610 [2024-11-20 07:22:02.765633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.610 "name": "raid_bdev1", 00:24:38.610 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:38.610 "strip_size_kb": 64, 00:24:38.610 "state": "online", 00:24:38.610 "raid_level": "raid0", 00:24:38.610 "superblock": true, 00:24:38.610 "num_base_bdevs": 4, 00:24:38.610 "num_base_bdevs_discovered": 4, 00:24:38.610 "num_base_bdevs_operational": 4, 00:24:38.610 "base_bdevs_list": [ 00:24:38.610 { 00:24:38.610 "name": "pt1", 00:24:38.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.610 "is_configured": true, 00:24:38.610 "data_offset": 2048, 00:24:38.610 "data_size": 63488 00:24:38.610 }, 00:24:38.610 { 00:24:38.610 "name": "pt2", 00:24:38.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.610 "is_configured": true, 00:24:38.610 "data_offset": 2048, 00:24:38.610 "data_size": 63488 00:24:38.610 }, 00:24:38.610 { 00:24:38.610 "name": "pt3", 00:24:38.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:38.610 "is_configured": true, 00:24:38.610 "data_offset": 2048, 00:24:38.610 
"data_size": 63488 00:24:38.610 }, 00:24:38.610 { 00:24:38.610 "name": "pt4", 00:24:38.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:38.610 "is_configured": true, 00:24:38.610 "data_offset": 2048, 00:24:38.610 "data_size": 63488 00:24:38.610 } 00:24:38.610 ] 00:24:38.610 }' 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.610 07:22:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.178 [2024-11-20 07:22:03.298228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:39.178 "name": "raid_bdev1", 00:24:39.178 "aliases": [ 00:24:39.178 "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5" 
00:24:39.178 ], 00:24:39.178 "product_name": "Raid Volume", 00:24:39.178 "block_size": 512, 00:24:39.178 "num_blocks": 253952, 00:24:39.178 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:39.178 "assigned_rate_limits": { 00:24:39.178 "rw_ios_per_sec": 0, 00:24:39.178 "rw_mbytes_per_sec": 0, 00:24:39.178 "r_mbytes_per_sec": 0, 00:24:39.178 "w_mbytes_per_sec": 0 00:24:39.178 }, 00:24:39.178 "claimed": false, 00:24:39.178 "zoned": false, 00:24:39.178 "supported_io_types": { 00:24:39.178 "read": true, 00:24:39.178 "write": true, 00:24:39.178 "unmap": true, 00:24:39.178 "flush": true, 00:24:39.178 "reset": true, 00:24:39.178 "nvme_admin": false, 00:24:39.178 "nvme_io": false, 00:24:39.178 "nvme_io_md": false, 00:24:39.178 "write_zeroes": true, 00:24:39.178 "zcopy": false, 00:24:39.178 "get_zone_info": false, 00:24:39.178 "zone_management": false, 00:24:39.178 "zone_append": false, 00:24:39.178 "compare": false, 00:24:39.178 "compare_and_write": false, 00:24:39.178 "abort": false, 00:24:39.178 "seek_hole": false, 00:24:39.178 "seek_data": false, 00:24:39.178 "copy": false, 00:24:39.178 "nvme_iov_md": false 00:24:39.178 }, 00:24:39.178 "memory_domains": [ 00:24:39.178 { 00:24:39.178 "dma_device_id": "system", 00:24:39.178 "dma_device_type": 1 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.178 "dma_device_type": 2 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "system", 00:24:39.178 "dma_device_type": 1 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.178 "dma_device_type": 2 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "system", 00:24:39.178 "dma_device_type": 1 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.178 "dma_device_type": 2 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": "system", 00:24:39.178 "dma_device_type": 1 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:39.178 "dma_device_type": 2 00:24:39.178 } 00:24:39.178 ], 00:24:39.178 "driver_specific": { 00:24:39.178 "raid": { 00:24:39.178 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:39.178 "strip_size_kb": 64, 00:24:39.178 "state": "online", 00:24:39.178 "raid_level": "raid0", 00:24:39.178 "superblock": true, 00:24:39.178 "num_base_bdevs": 4, 00:24:39.178 "num_base_bdevs_discovered": 4, 00:24:39.178 "num_base_bdevs_operational": 4, 00:24:39.178 "base_bdevs_list": [ 00:24:39.178 { 00:24:39.178 "name": "pt1", 00:24:39.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.178 "is_configured": true, 00:24:39.178 "data_offset": 2048, 00:24:39.178 "data_size": 63488 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "name": "pt2", 00:24:39.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.178 "is_configured": true, 00:24:39.178 "data_offset": 2048, 00:24:39.178 "data_size": 63488 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "name": "pt3", 00:24:39.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.178 "is_configured": true, 00:24:39.178 "data_offset": 2048, 00:24:39.178 "data_size": 63488 00:24:39.178 }, 00:24:39.178 { 00:24:39.178 "name": "pt4", 00:24:39.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:39.178 "is_configured": true, 00:24:39.178 "data_offset": 2048, 00:24:39.178 "data_size": 63488 00:24:39.178 } 00:24:39.178 ] 00:24:39.178 } 00:24:39.178 } 00:24:39.178 }' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:39.178 pt2 00:24:39.178 pt3 00:24:39.178 pt4' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.178 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.437 07:22:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.437 [2024-11-20 07:22:03.670336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.437 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8183c0f4-2c73-4e03-b643-4cf9ee55c0d5 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8183c0f4-2c73-4e03-b643-4cf9ee55c0d5 ']' 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 [2024-11-20 07:22:03.737995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:39.697 [2024-11-20 07:22:03.738039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:39.697 [2024-11-20 07:22:03.738165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:39.697 [2024-11-20 07:22:03.738262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:39.697 [2024-11-20 07:22:03.738286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.697 07:22:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.697 [2024-11-20 07:22:03.894043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:39.697 [2024-11-20 07:22:03.896736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:39.697 [2024-11-20 07:22:03.896810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:39.697 [2024-11-20 07:22:03.896869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:39.697 [2024-11-20 07:22:03.896953] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:39.697 [2024-11-20 07:22:03.897030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:39.697 [2024-11-20 07:22:03.897064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:39.697 [2024-11-20 07:22:03.897098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:39.697 [2024-11-20 07:22:03.897120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:39.697 [2024-11-20 07:22:03.897139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:24:39.697 request: 00:24:39.697 { 00:24:39.697 "name": "raid_bdev1", 00:24:39.697 "raid_level": "raid0", 00:24:39.697 "base_bdevs": [ 00:24:39.697 "malloc1", 00:24:39.697 "malloc2", 00:24:39.697 "malloc3", 00:24:39.697 "malloc4" 00:24:39.697 ], 00:24:39.697 "strip_size_kb": 64, 00:24:39.697 "superblock": false, 00:24:39.697 "method": "bdev_raid_create", 00:24:39.697 "req_id": 1 00:24:39.697 } 00:24:39.697 Got JSON-RPC error response 00:24:39.697 response: 00:24:39.697 { 00:24:39.697 "code": -17, 00:24:39.697 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:39.697 } 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.697 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.698 [2024-11-20 07:22:03.962056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:39.698 [2024-11-20 07:22:03.962282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.698 [2024-11-20 07:22:03.962421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:39.698 [2024-11-20 07:22:03.962538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.698 [2024-11-20 07:22:03.965529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.698 [2024-11-20 07:22:03.965595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:39.698 [2024-11-20 07:22:03.965712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:39.698 [2024-11-20 07:22:03.965797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:39.698 pt1 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.698 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.956 07:22:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.956 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.956 "name": "raid_bdev1", 00:24:39.956 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:39.956 "strip_size_kb": 64, 00:24:39.956 "state": "configuring", 00:24:39.957 "raid_level": "raid0", 00:24:39.957 "superblock": true, 00:24:39.957 "num_base_bdevs": 4, 00:24:39.957 "num_base_bdevs_discovered": 1, 00:24:39.957 "num_base_bdevs_operational": 4, 00:24:39.957 "base_bdevs_list": [ 00:24:39.957 { 00:24:39.957 "name": "pt1", 00:24:39.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.957 "is_configured": true, 00:24:39.957 "data_offset": 2048, 00:24:39.957 "data_size": 63488 00:24:39.957 }, 00:24:39.957 { 00:24:39.957 "name": null, 00:24:39.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.957 "is_configured": false, 00:24:39.957 "data_offset": 2048, 00:24:39.957 "data_size": 63488 00:24:39.957 }, 00:24:39.957 { 00:24:39.957 "name": null, 00:24:39.957 
"uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.957 "is_configured": false, 00:24:39.957 "data_offset": 2048, 00:24:39.957 "data_size": 63488 00:24:39.957 }, 00:24:39.957 { 00:24:39.957 "name": null, 00:24:39.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:39.957 "is_configured": false, 00:24:39.957 "data_offset": 2048, 00:24:39.957 "data_size": 63488 00:24:39.957 } 00:24:39.957 ] 00:24:39.957 }' 00:24:39.957 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.957 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.216 [2024-11-20 07:22:04.470209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.216 [2024-11-20 07:22:04.470310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.216 [2024-11-20 07:22:04.470343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:40.216 [2024-11-20 07:22:04.470362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.216 [2024-11-20 07:22:04.470962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.216 [2024-11-20 07:22:04.471015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.216 [2024-11-20 07:22:04.471123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:40.216 [2024-11-20 07:22:04.471161] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.216 pt2 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.216 [2024-11-20 07:22:04.482273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.216 07:22:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.216 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.475 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.475 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.475 "name": "raid_bdev1", 00:24:40.475 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:40.475 "strip_size_kb": 64, 00:24:40.475 "state": "configuring", 00:24:40.475 "raid_level": "raid0", 00:24:40.475 "superblock": true, 00:24:40.476 "num_base_bdevs": 4, 00:24:40.476 "num_base_bdevs_discovered": 1, 00:24:40.476 "num_base_bdevs_operational": 4, 00:24:40.476 "base_bdevs_list": [ 00:24:40.476 { 00:24:40.476 "name": "pt1", 00:24:40.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:40.476 "is_configured": true, 00:24:40.476 "data_offset": 2048, 00:24:40.476 "data_size": 63488 00:24:40.476 }, 00:24:40.476 { 00:24:40.476 "name": null, 00:24:40.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.476 "is_configured": false, 00:24:40.476 "data_offset": 0, 00:24:40.476 "data_size": 63488 00:24:40.476 }, 00:24:40.476 { 00:24:40.476 "name": null, 00:24:40.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:40.476 "is_configured": false, 00:24:40.476 "data_offset": 2048, 00:24:40.476 "data_size": 63488 00:24:40.476 }, 00:24:40.476 { 00:24:40.476 "name": null, 00:24:40.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:40.476 "is_configured": false, 00:24:40.476 "data_offset": 2048, 00:24:40.476 "data_size": 63488 00:24:40.476 } 00:24:40.476 ] 00:24:40.476 }' 00:24:40.476 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.476 07:22:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 [2024-11-20 07:22:04.990319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.736 [2024-11-20 07:22:04.990411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.736 [2024-11-20 07:22:04.990442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:40.736 [2024-11-20 07:22:04.990457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.736 [2024-11-20 07:22:04.991093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.736 [2024-11-20 07:22:04.991126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.736 [2024-11-20 07:22:04.991233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:40.736 [2024-11-20 07:22:04.991265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.736 pt2 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.736 07:22:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 [2024-11-20 07:22:05.002280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:40.736 [2024-11-20 07:22:05.002498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.736 [2024-11-20 07:22:05.002542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:40.736 [2024-11-20 07:22:05.002559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.736 [2024-11-20 07:22:05.003066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.736 [2024-11-20 07:22:05.003097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:40.736 [2024-11-20 07:22:05.003193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:40.736 [2024-11-20 07:22:05.003220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:40.736 pt3 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 [2024-11-20 07:22:05.010279] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:40.736 [2024-11-20 07:22:05.010339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.736 [2024-11-20 07:22:05.010368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:40.736 [2024-11-20 07:22:05.010382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.736 [2024-11-20 07:22:05.010951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.736 [2024-11-20 07:22:05.010992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:40.736 [2024-11-20 07:22:05.011087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:40.736 [2024-11-20 07:22:05.011115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:40.736 [2024-11-20 07:22:05.011278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:40.736 [2024-11-20 07:22:05.011294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:40.736 [2024-11-20 07:22:05.011604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:40.736 [2024-11-20 07:22:05.011795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:40.736 [2024-11-20 07:22:05.011817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:40.736 [2024-11-20 07:22:05.011969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.736 pt4 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.736 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.024 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.024 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.024 "name": "raid_bdev1", 00:24:41.024 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:41.024 "strip_size_kb": 64, 00:24:41.024 "state": "online", 00:24:41.024 "raid_level": "raid0", 00:24:41.024 
"superblock": true, 00:24:41.024 "num_base_bdevs": 4, 00:24:41.024 "num_base_bdevs_discovered": 4, 00:24:41.024 "num_base_bdevs_operational": 4, 00:24:41.024 "base_bdevs_list": [ 00:24:41.024 { 00:24:41.024 "name": "pt1", 00:24:41.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:41.024 "is_configured": true, 00:24:41.024 "data_offset": 2048, 00:24:41.024 "data_size": 63488 00:24:41.024 }, 00:24:41.024 { 00:24:41.024 "name": "pt2", 00:24:41.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.024 "is_configured": true, 00:24:41.024 "data_offset": 2048, 00:24:41.024 "data_size": 63488 00:24:41.024 }, 00:24:41.024 { 00:24:41.024 "name": "pt3", 00:24:41.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:41.024 "is_configured": true, 00:24:41.024 "data_offset": 2048, 00:24:41.024 "data_size": 63488 00:24:41.024 }, 00:24:41.024 { 00:24:41.024 "name": "pt4", 00:24:41.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:41.024 "is_configured": true, 00:24:41.024 "data_offset": 2048, 00:24:41.024 "data_size": 63488 00:24:41.024 } 00:24:41.024 ] 00:24:41.024 }' 00:24:41.024 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.024 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:41.284 07:22:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:41.284 [2024-11-20 07:22:05.510980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.284 "name": "raid_bdev1", 00:24:41.284 "aliases": [ 00:24:41.284 "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5" 00:24:41.284 ], 00:24:41.284 "product_name": "Raid Volume", 00:24:41.284 "block_size": 512, 00:24:41.284 "num_blocks": 253952, 00:24:41.284 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:41.284 "assigned_rate_limits": { 00:24:41.284 "rw_ios_per_sec": 0, 00:24:41.284 "rw_mbytes_per_sec": 0, 00:24:41.284 "r_mbytes_per_sec": 0, 00:24:41.284 "w_mbytes_per_sec": 0 00:24:41.284 }, 00:24:41.284 "claimed": false, 00:24:41.284 "zoned": false, 00:24:41.284 "supported_io_types": { 00:24:41.284 "read": true, 00:24:41.284 "write": true, 00:24:41.284 "unmap": true, 00:24:41.284 "flush": true, 00:24:41.284 "reset": true, 00:24:41.284 "nvme_admin": false, 00:24:41.284 "nvme_io": false, 00:24:41.284 "nvme_io_md": false, 00:24:41.284 "write_zeroes": true, 00:24:41.284 "zcopy": false, 00:24:41.284 "get_zone_info": false, 00:24:41.284 "zone_management": false, 00:24:41.284 "zone_append": false, 00:24:41.284 "compare": false, 00:24:41.284 "compare_and_write": false, 00:24:41.284 "abort": false, 00:24:41.284 "seek_hole": false, 00:24:41.284 "seek_data": false, 00:24:41.284 "copy": false, 00:24:41.284 "nvme_iov_md": false 00:24:41.284 }, 00:24:41.284 
"memory_domains": [ 00:24:41.284 { 00:24:41.284 "dma_device_id": "system", 00:24:41.284 "dma_device_type": 1 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.284 "dma_device_type": 2 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "system", 00:24:41.284 "dma_device_type": 1 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.284 "dma_device_type": 2 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "system", 00:24:41.284 "dma_device_type": 1 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.284 "dma_device_type": 2 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "system", 00:24:41.284 "dma_device_type": 1 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.284 "dma_device_type": 2 00:24:41.284 } 00:24:41.284 ], 00:24:41.284 "driver_specific": { 00:24:41.284 "raid": { 00:24:41.284 "uuid": "8183c0f4-2c73-4e03-b643-4cf9ee55c0d5", 00:24:41.284 "strip_size_kb": 64, 00:24:41.284 "state": "online", 00:24:41.284 "raid_level": "raid0", 00:24:41.284 "superblock": true, 00:24:41.284 "num_base_bdevs": 4, 00:24:41.284 "num_base_bdevs_discovered": 4, 00:24:41.284 "num_base_bdevs_operational": 4, 00:24:41.284 "base_bdevs_list": [ 00:24:41.284 { 00:24:41.284 "name": "pt1", 00:24:41.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:41.284 "is_configured": true, 00:24:41.284 "data_offset": 2048, 00:24:41.284 "data_size": 63488 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "name": "pt2", 00:24:41.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:41.284 "is_configured": true, 00:24:41.284 "data_offset": 2048, 00:24:41.284 "data_size": 63488 00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "name": "pt3", 00:24:41.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:41.284 "is_configured": true, 00:24:41.284 "data_offset": 2048, 00:24:41.284 "data_size": 63488 
00:24:41.284 }, 00:24:41.284 { 00:24:41.284 "name": "pt4", 00:24:41.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:41.284 "is_configured": true, 00:24:41.284 "data_offset": 2048, 00:24:41.284 "data_size": 63488 00:24:41.284 } 00:24:41.284 ] 00:24:41.284 } 00:24:41.284 } 00:24:41.284 }' 00:24:41.284 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:41.544 pt2 00:24:41.544 pt3 00:24:41.544 pt4' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:41.544 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.804 [2024-11-20 07:22:05.882966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8183c0f4-2c73-4e03-b643-4cf9ee55c0d5 '!=' 8183c0f4-2c73-4e03-b643-4cf9ee55c0d5 ']' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71039 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71039 ']' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71039 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71039 00:24:41.804 killing process with pid 71039 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71039' 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71039 00:24:41.804 [2024-11-20 07:22:05.960516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.804 [2024-11-20 07:22:05.960625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.804 07:22:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71039 00:24:41.804 [2024-11-20 07:22:05.960754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.804 [2024-11-20 07:22:05.960771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:42.064 [2024-11-20 07:22:06.308543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:43.442 07:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:43.442 00:24:43.442 real 0m5.929s 00:24:43.442 user 0m8.893s 00:24:43.442 sys 0m0.903s 00:24:43.442 07:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.442 07:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.442 ************************************ 00:24:43.442 END TEST raid_superblock_test 
00:24:43.442 ************************************ 00:24:43.442 07:22:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:24:43.442 07:22:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:43.442 07:22:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.442 07:22:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:43.442 ************************************ 00:24:43.442 START TEST raid_read_error_test 00:24:43.442 ************************************ 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iGJP4aJ3qB 00:24:43.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71310 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71310 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71310 ']' 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.442 07:22:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.442 [2024-11-20 07:22:07.515124] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:43.442 [2024-11-20 07:22:07.515308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:24:43.442 [2024-11-20 07:22:07.695718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.701 [2024-11-20 07:22:07.816893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.960 [2024-11-20 07:22:08.012666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.960 [2024-11-20 07:22:08.012730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.219 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 BaseBdev1_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 true 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 [2024-11-20 07:22:08.526087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:44.481 [2024-11-20 07:22:08.526157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.481 [2024-11-20 07:22:08.526188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:44.481 [2024-11-20 07:22:08.526208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.481 [2024-11-20 07:22:08.529101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.481 [2024-11-20 07:22:08.529154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:44.481 BaseBdev1 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 BaseBdev2_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 true 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 [2024-11-20 07:22:08.586355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:44.481 [2024-11-20 07:22:08.586425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.481 [2024-11-20 07:22:08.586451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:44.481 [2024-11-20 07:22:08.586468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.481 [2024-11-20 07:22:08.589442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.481 [2024-11-20 07:22:08.589491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:44.481 BaseBdev2 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 BaseBdev3_malloc 00:24:44.481 07:22:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:44.481 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 true 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 [2024-11-20 07:22:08.652862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:44.482 [2024-11-20 07:22:08.652977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.482 [2024-11-20 07:22:08.653005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:44.482 [2024-11-20 07:22:08.653022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.482 [2024-11-20 07:22:08.655980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.482 [2024-11-20 07:22:08.656166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:44.482 BaseBdev3 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 BaseBdev4_malloc 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 true 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 [2024-11-20 07:22:08.712501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:44.482 [2024-11-20 07:22:08.712571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.482 [2024-11-20 07:22:08.712622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:44.482 [2024-11-20 07:22:08.712644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.482 [2024-11-20 07:22:08.715500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.482 [2024-11-20 07:22:08.715705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:44.482 BaseBdev4 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 [2024-11-20 07:22:08.720618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:44.482 [2024-11-20 07:22:08.723189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.482 [2024-11-20 07:22:08.723292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:44.482 [2024-11-20 07:22:08.723397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:44.482 [2024-11-20 07:22:08.723734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:24:44.482 [2024-11-20 07:22:08.723764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:44.482 [2024-11-20 07:22:08.724092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:24:44.482 [2024-11-20 07:22:08.724315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:24:44.482 [2024-11-20 07:22:08.724332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:24:44.482 [2024-11-20 07:22:08.724617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:44.482 07:22:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.740 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.741 "name": "raid_bdev1", 00:24:44.741 "uuid": "c6060429-d70a-4328-8c91-df8ca11da780", 00:24:44.741 "strip_size_kb": 64, 00:24:44.741 "state": "online", 00:24:44.741 "raid_level": "raid0", 00:24:44.741 "superblock": true, 00:24:44.741 "num_base_bdevs": 4, 00:24:44.741 "num_base_bdevs_discovered": 4, 00:24:44.741 "num_base_bdevs_operational": 4, 00:24:44.741 "base_bdevs_list": [ 00:24:44.741 
{ 00:24:44.741 "name": "BaseBdev1", 00:24:44.741 "uuid": "1f0859d3-3e4e-5b3e-ac2f-8f1a03caf9c3", 00:24:44.741 "is_configured": true, 00:24:44.741 "data_offset": 2048, 00:24:44.741 "data_size": 63488 00:24:44.741 }, 00:24:44.741 { 00:24:44.741 "name": "BaseBdev2", 00:24:44.741 "uuid": "5fc223e1-838e-5340-915f-8498ef79d90a", 00:24:44.741 "is_configured": true, 00:24:44.741 "data_offset": 2048, 00:24:44.741 "data_size": 63488 00:24:44.741 }, 00:24:44.741 { 00:24:44.741 "name": "BaseBdev3", 00:24:44.741 "uuid": "b97f3944-cc73-541b-a5c3-a9d02d3e24af", 00:24:44.741 "is_configured": true, 00:24:44.741 "data_offset": 2048, 00:24:44.741 "data_size": 63488 00:24:44.741 }, 00:24:44.741 { 00:24:44.741 "name": "BaseBdev4", 00:24:44.741 "uuid": "94fbba26-2a19-5033-9cd6-d7fb615cb7e8", 00:24:44.741 "is_configured": true, 00:24:44.741 "data_offset": 2048, 00:24:44.741 "data_size": 63488 00:24:44.741 } 00:24:44.741 ] 00:24:44.741 }' 00:24:44.741 07:22:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.741 07:22:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.999 07:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:44.999 07:22:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:45.256 [2024-11-20 07:22:09.350218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.192 07:22:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.192 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.192 07:22:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.192 "name": "raid_bdev1", 00:24:46.192 "uuid": "c6060429-d70a-4328-8c91-df8ca11da780", 00:24:46.192 "strip_size_kb": 64, 00:24:46.192 "state": "online", 00:24:46.192 "raid_level": "raid0", 00:24:46.192 "superblock": true, 00:24:46.193 "num_base_bdevs": 4, 00:24:46.193 "num_base_bdevs_discovered": 4, 00:24:46.193 "num_base_bdevs_operational": 4, 00:24:46.193 "base_bdevs_list": [ 00:24:46.193 { 00:24:46.193 "name": "BaseBdev1", 00:24:46.193 "uuid": "1f0859d3-3e4e-5b3e-ac2f-8f1a03caf9c3", 00:24:46.193 "is_configured": true, 00:24:46.193 "data_offset": 2048, 00:24:46.193 "data_size": 63488 00:24:46.193 }, 00:24:46.193 { 00:24:46.193 "name": "BaseBdev2", 00:24:46.193 "uuid": "5fc223e1-838e-5340-915f-8498ef79d90a", 00:24:46.193 "is_configured": true, 00:24:46.193 "data_offset": 2048, 00:24:46.193 "data_size": 63488 00:24:46.193 }, 00:24:46.193 { 00:24:46.193 "name": "BaseBdev3", 00:24:46.193 "uuid": "b97f3944-cc73-541b-a5c3-a9d02d3e24af", 00:24:46.193 "is_configured": true, 00:24:46.193 "data_offset": 2048, 00:24:46.193 "data_size": 63488 00:24:46.193 }, 00:24:46.193 { 00:24:46.193 "name": "BaseBdev4", 00:24:46.193 "uuid": "94fbba26-2a19-5033-9cd6-d7fb615cb7e8", 00:24:46.193 "is_configured": true, 00:24:46.193 "data_offset": 2048, 00:24:46.193 "data_size": 63488 00:24:46.193 } 00:24:46.193 ] 00:24:46.193 }' 00:24:46.193 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.193 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.781 [2024-11-20 07:22:10.798157] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.781 [2024-11-20 07:22:10.798368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.781 [2024-11-20 07:22:10.801955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.781 [2024-11-20 07:22:10.802203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.781 [2024-11-20 07:22:10.802379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.781 [2024-11-20 07:22:10.802537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:24:46.781 { 00:24:46.781 "results": [ 00:24:46.781 { 00:24:46.781 "job": "raid_bdev1", 00:24:46.781 "core_mask": "0x1", 00:24:46.781 "workload": "randrw", 00:24:46.781 "percentage": 50, 00:24:46.781 "status": "finished", 00:24:46.781 "queue_depth": 1, 00:24:46.781 "io_size": 131072, 00:24:46.781 "runtime": 1.445422, 00:24:46.781 "iops": 10782.318243391896, 00:24:46.781 "mibps": 1347.789780423987, 00:24:46.781 "io_failed": 1, 00:24:46.781 "io_timeout": 0, 00:24:46.781 "avg_latency_us": 129.80518180651634, 00:24:46.781 "min_latency_us": 38.167272727272724, 00:24:46.781 "max_latency_us": 1861.8181818181818 00:24:46.781 } 00:24:46.781 ], 00:24:46.781 "core_count": 1 00:24:46.781 } 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71310 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71310 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71310 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71310' 00:24:46.781 killing process with pid 71310 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71310 00:24:46.781 [2024-11-20 07:22:10.844609] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.781 07:22:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71310 00:24:47.039 [2024-11-20 07:22:11.128866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iGJP4aJ3qB 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:47.975 ************************************ 00:24:47.975 END TEST raid_read_error_test 00:24:47.975 ************************************ 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:24:47.975 00:24:47.975 real 0m4.817s 
00:24:47.975 user 0m5.921s 00:24:47.975 sys 0m0.597s 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.975 07:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.975 07:22:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:24:47.975 07:22:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:47.975 07:22:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.975 07:22:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:47.975 ************************************ 00:24:47.975 START TEST raid_write_error_test 00:24:47.975 ************************************ 00:24:47.975 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:24:47.975 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:47.975 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:24:47.975 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:48.234 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qV8lT4QaaH 00:24:48.235 07:22:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71456 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71456 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71456 ']' 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.235 07:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.235 [2024-11-20 07:22:12.386216] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:48.235 [2024-11-20 07:22:12.386450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71456 ] 00:24:48.493 [2024-11-20 07:22:12.566185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.493 [2024-11-20 07:22:12.695834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.752 [2024-11-20 07:22:12.901780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.752 [2024-11-20 07:22:12.901848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.319 BaseBdev1_malloc 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.319 true 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.319 [2024-11-20 07:22:13.368458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:49.319 [2024-11-20 07:22:13.368537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.319 [2024-11-20 07:22:13.368566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:49.319 [2024-11-20 07:22:13.368583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.319 [2024-11-20 07:22:13.371500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.319 [2024-11-20 07:22:13.371564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.319 BaseBdev1 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.319 BaseBdev2_malloc 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:49.319 07:22:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.319 true 00:24:49.319 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 [2024-11-20 07:22:13.431704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:49.320 [2024-11-20 07:22:13.431783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.320 [2024-11-20 07:22:13.431809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:49.320 [2024-11-20 07:22:13.431827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.320 [2024-11-20 07:22:13.434788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.320 [2024-11-20 07:22:13.434847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:49.320 BaseBdev2 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:24:49.320 BaseBdev3_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 true 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 [2024-11-20 07:22:13.505358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:49.320 [2024-11-20 07:22:13.505425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.320 [2024-11-20 07:22:13.505452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:49.320 [2024-11-20 07:22:13.505471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.320 [2024-11-20 07:22:13.508467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.320 [2024-11-20 07:22:13.508515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:49.320 BaseBdev3 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 BaseBdev4_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 true 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 [2024-11-20 07:22:13.565595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:49.320 [2024-11-20 07:22:13.565685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.320 [2024-11-20 07:22:13.565712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:49.320 [2024-11-20 07:22:13.565730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.320 [2024-11-20 07:22:13.568555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.320 [2024-11-20 07:22:13.568676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:49.320 BaseBdev4 
00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 [2024-11-20 07:22:13.573671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.320 [2024-11-20 07:22:13.576154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:49.320 [2024-11-20 07:22:13.576289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:49.320 [2024-11-20 07:22:13.576381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:49.320 [2024-11-20 07:22:13.576699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:24:49.320 [2024-11-20 07:22:13.576726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:49.320 [2024-11-20 07:22:13.577022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:24:49.320 [2024-11-20 07:22:13.577230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:24:49.320 [2024-11-20 07:22:13.577250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:24:49.320 [2024-11-20 07:22:13.577420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.579 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.579 "name": "raid_bdev1", 00:24:49.579 "uuid": "789499eb-36ce-46bd-87d9-f2c4e06a5223", 00:24:49.579 "strip_size_kb": 64, 00:24:49.579 "state": "online", 00:24:49.579 "raid_level": "raid0", 00:24:49.579 "superblock": true, 00:24:49.579 "num_base_bdevs": 4, 00:24:49.579 "num_base_bdevs_discovered": 4, 00:24:49.579 
"num_base_bdevs_operational": 4, 00:24:49.579 "base_bdevs_list": [ 00:24:49.579 { 00:24:49.579 "name": "BaseBdev1", 00:24:49.579 "uuid": "46c01108-36f6-5049-9342-5eda90ba0558", 00:24:49.579 "is_configured": true, 00:24:49.579 "data_offset": 2048, 00:24:49.579 "data_size": 63488 00:24:49.579 }, 00:24:49.579 { 00:24:49.579 "name": "BaseBdev2", 00:24:49.579 "uuid": "a991fd08-c300-5659-9b25-f41f6a14901f", 00:24:49.579 "is_configured": true, 00:24:49.579 "data_offset": 2048, 00:24:49.579 "data_size": 63488 00:24:49.579 }, 00:24:49.579 { 00:24:49.579 "name": "BaseBdev3", 00:24:49.579 "uuid": "b7e6083f-e347-5d4b-8607-b33cbe3466fb", 00:24:49.579 "is_configured": true, 00:24:49.579 "data_offset": 2048, 00:24:49.579 "data_size": 63488 00:24:49.579 }, 00:24:49.579 { 00:24:49.579 "name": "BaseBdev4", 00:24:49.579 "uuid": "d08412c4-3958-5a4e-be95-570caea357af", 00:24:49.579 "is_configured": true, 00:24:49.579 "data_offset": 2048, 00:24:49.579 "data_size": 63488 00:24:49.579 } 00:24:49.579 ] 00:24:49.579 }' 00:24:49.579 07:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.579 07:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.838 07:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:49.838 07:22:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:50.096 [2024-11-20 07:22:14.239307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.033 "name": "raid_bdev1", 00:24:51.033 "uuid": "789499eb-36ce-46bd-87d9-f2c4e06a5223", 00:24:51.033 "strip_size_kb": 64, 00:24:51.033 "state": "online", 00:24:51.033 "raid_level": "raid0", 00:24:51.033 "superblock": true, 00:24:51.033 "num_base_bdevs": 4, 00:24:51.033 "num_base_bdevs_discovered": 4, 00:24:51.033 "num_base_bdevs_operational": 4, 00:24:51.033 "base_bdevs_list": [ 00:24:51.033 { 00:24:51.033 "name": "BaseBdev1", 00:24:51.033 "uuid": "46c01108-36f6-5049-9342-5eda90ba0558", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev2", 00:24:51.033 "uuid": "a991fd08-c300-5659-9b25-f41f6a14901f", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev3", 00:24:51.033 "uuid": "b7e6083f-e347-5d4b-8607-b33cbe3466fb", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev4", 00:24:51.033 "uuid": "d08412c4-3958-5a4e-be95-570caea357af", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 } 00:24:51.033 ] 00:24:51.033 }' 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.033 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.600 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:51.600 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.600 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:24:51.600 [2024-11-20 07:22:15.674641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:51.600 [2024-11-20 07:22:15.674840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:51.600 [2024-11-20 07:22:15.678363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.600 [2024-11-20 07:22:15.678438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.601 [2024-11-20 07:22:15.678498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.601 [2024-11-20 07:22:15.678518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:24:51.601 { 00:24:51.601 "results": [ 00:24:51.601 { 00:24:51.601 "job": "raid_bdev1", 00:24:51.601 "core_mask": "0x1", 00:24:51.601 "workload": "randrw", 00:24:51.601 "percentage": 50, 00:24:51.601 "status": "finished", 00:24:51.601 "queue_depth": 1, 00:24:51.601 "io_size": 131072, 00:24:51.601 "runtime": 1.43316, 00:24:51.601 "iops": 10818.052415640961, 00:24:51.601 "mibps": 1352.2565519551201, 00:24:51.601 "io_failed": 1, 00:24:51.601 "io_timeout": 0, 00:24:51.601 "avg_latency_us": 129.27782357597255, 00:24:51.601 "min_latency_us": 39.56363636363636, 00:24:51.601 "max_latency_us": 1876.7127272727273 00:24:51.601 } 00:24:51.601 ], 00:24:51.601 "core_count": 1 00:24:51.601 } 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71456 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71456 ']' 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71456 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71456 00:24:51.601 killing process with pid 71456 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71456' 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71456 00:24:51.601 [2024-11-20 07:22:15.712646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.601 07:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71456 00:24:51.860 [2024-11-20 07:22:16.000256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qV8lT4QaaH 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:52.889 ************************************ 00:24:52.889 END TEST raid_write_error_test 00:24:52.889 ************************************ 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.70 != \0\.\0\0 ]] 00:24:52.889 00:24:52.889 real 0m4.822s 00:24:52.889 user 0m5.954s 00:24:52.889 sys 0m0.603s 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.889 07:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.889 07:22:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:52.889 07:22:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:24:52.889 07:22:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:52.889 07:22:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.889 07:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.889 ************************************ 00:24:52.889 START TEST raid_state_function_test 00:24:52.889 ************************************ 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.889 07:22:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71600 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:52.890 Process raid pid: 71600 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71600' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71600 00:24:52.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71600 ']' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.890 07:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.149 [2024-11-20 07:22:17.256355] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:24:53.149 [2024-11-20 07:22:17.257396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.408 [2024-11-20 07:22:17.451504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.408 [2024-11-20 07:22:17.621753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.667 [2024-11-20 07:22:17.828573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.667 [2024-11-20 07:22:17.828631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.235 [2024-11-20 07:22:18.259278] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:54.235 [2024-11-20 07:22:18.259353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:54.235 [2024-11-20 07:22:18.259369] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.235 [2024-11-20 07:22:18.259385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.235 [2024-11-20 07:22:18.259393] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:24:54.235 [2024-11-20 07:22:18.259423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:54.235 [2024-11-20 07:22:18.259433] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:54.235 [2024-11-20 07:22:18.259446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.235 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.235 "name": "Existed_Raid", 00:24:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.235 "strip_size_kb": 64, 00:24:54.235 "state": "configuring", 00:24:54.235 "raid_level": "concat", 00:24:54.235 "superblock": false, 00:24:54.235 "num_base_bdevs": 4, 00:24:54.235 "num_base_bdevs_discovered": 0, 00:24:54.235 "num_base_bdevs_operational": 4, 00:24:54.235 "base_bdevs_list": [ 00:24:54.235 { 00:24:54.235 "name": "BaseBdev1", 00:24:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.235 "is_configured": false, 00:24:54.235 "data_offset": 0, 00:24:54.235 "data_size": 0 00:24:54.235 }, 00:24:54.235 { 00:24:54.235 "name": "BaseBdev2", 00:24:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.235 "is_configured": false, 00:24:54.235 "data_offset": 0, 00:24:54.235 "data_size": 0 00:24:54.235 }, 00:24:54.235 { 00:24:54.235 "name": "BaseBdev3", 00:24:54.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.236 "is_configured": false, 00:24:54.236 "data_offset": 0, 00:24:54.236 "data_size": 0 00:24:54.236 }, 00:24:54.236 { 00:24:54.236 "name": "BaseBdev4", 00:24:54.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.236 "is_configured": false, 00:24:54.236 "data_offset": 0, 00:24:54.236 "data_size": 0 00:24:54.236 } 00:24:54.236 ] 00:24:54.236 }' 00:24:54.236 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.236 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.494 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:24:54.494 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.494 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.753 [2024-11-20 07:22:18.783417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:54.753 [2024-11-20 07:22:18.783463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:54.753 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.753 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:54.753 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.753 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.753 [2024-11-20 07:22:18.791388] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:54.753 [2024-11-20 07:22:18.791454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:54.754 [2024-11-20 07:22:18.791468] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.754 [2024-11-20 07:22:18.791483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.754 [2024-11-20 07:22:18.791492] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:54.754 [2024-11-20 07:22:18.791505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:54.754 [2024-11-20 07:22:18.791514] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:54.754 [2024-11-20 07:22:18.791527] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.754 [2024-11-20 07:22:18.836845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.754 BaseBdev1 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.754 [ 00:24:54.754 { 00:24:54.754 "name": "BaseBdev1", 00:24:54.754 "aliases": [ 00:24:54.754 "96f6d591-0274-4947-8ad2-cee49fe2ec79" 00:24:54.754 ], 00:24:54.754 "product_name": "Malloc disk", 00:24:54.754 "block_size": 512, 00:24:54.754 "num_blocks": 65536, 00:24:54.754 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79", 00:24:54.754 "assigned_rate_limits": { 00:24:54.754 "rw_ios_per_sec": 0, 00:24:54.754 "rw_mbytes_per_sec": 0, 00:24:54.754 "r_mbytes_per_sec": 0, 00:24:54.754 "w_mbytes_per_sec": 0 00:24:54.754 }, 00:24:54.754 "claimed": true, 00:24:54.754 "claim_type": "exclusive_write", 00:24:54.754 "zoned": false, 00:24:54.754 "supported_io_types": { 00:24:54.754 "read": true, 00:24:54.754 "write": true, 00:24:54.754 "unmap": true, 00:24:54.754 "flush": true, 00:24:54.754 "reset": true, 00:24:54.754 "nvme_admin": false, 00:24:54.754 "nvme_io": false, 00:24:54.754 "nvme_io_md": false, 00:24:54.754 "write_zeroes": true, 00:24:54.754 "zcopy": true, 00:24:54.754 "get_zone_info": false, 00:24:54.754 "zone_management": false, 00:24:54.754 "zone_append": false, 00:24:54.754 "compare": false, 00:24:54.754 "compare_and_write": false, 00:24:54.754 "abort": true, 00:24:54.754 "seek_hole": false, 00:24:54.754 "seek_data": false, 00:24:54.754 "copy": true, 00:24:54.754 "nvme_iov_md": false 00:24:54.754 }, 00:24:54.754 "memory_domains": [ 00:24:54.754 { 00:24:54.754 "dma_device_id": "system", 00:24:54.754 "dma_device_type": 1 00:24:54.754 }, 00:24:54.754 { 00:24:54.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.754 "dma_device_type": 2 00:24:54.754 } 00:24:54.754 ], 00:24:54.754 "driver_specific": {} 00:24:54.754 } 00:24:54.754 ] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.754 "name": "Existed_Raid", 
00:24:54.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.754 "strip_size_kb": 64, 00:24:54.754 "state": "configuring", 00:24:54.754 "raid_level": "concat", 00:24:54.754 "superblock": false, 00:24:54.754 "num_base_bdevs": 4, 00:24:54.754 "num_base_bdevs_discovered": 1, 00:24:54.754 "num_base_bdevs_operational": 4, 00:24:54.754 "base_bdevs_list": [ 00:24:54.754 { 00:24:54.754 "name": "BaseBdev1", 00:24:54.754 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79", 00:24:54.754 "is_configured": true, 00:24:54.754 "data_offset": 0, 00:24:54.754 "data_size": 65536 00:24:54.754 }, 00:24:54.754 { 00:24:54.754 "name": "BaseBdev2", 00:24:54.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.754 "is_configured": false, 00:24:54.754 "data_offset": 0, 00:24:54.754 "data_size": 0 00:24:54.754 }, 00:24:54.754 { 00:24:54.754 "name": "BaseBdev3", 00:24:54.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.754 "is_configured": false, 00:24:54.754 "data_offset": 0, 00:24:54.754 "data_size": 0 00:24:54.754 }, 00:24:54.754 { 00:24:54.754 "name": "BaseBdev4", 00:24:54.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.754 "is_configured": false, 00:24:54.754 "data_offset": 0, 00:24:54.754 "data_size": 0 00:24:54.754 } 00:24:54.754 ] 00:24:54.754 }' 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.754 07:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.322 [2024-11-20 07:22:19.417055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:55.322 [2024-11-20 07:22:19.417129] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.322 [2024-11-20 07:22:19.425123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.322 [2024-11-20 07:22:19.427571] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:55.322 [2024-11-20 07:22:19.427801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:55.322 [2024-11-20 07:22:19.427829] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:55.322 [2024-11-20 07:22:19.427849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:55.322 [2024-11-20 07:22:19.427860] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:55.322 [2024-11-20 07:22:19.427874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.322 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.323 "name": "Existed_Raid", 00:24:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.323 "strip_size_kb": 64, 00:24:55.323 "state": "configuring", 00:24:55.323 "raid_level": "concat", 00:24:55.323 "superblock": false, 00:24:55.323 "num_base_bdevs": 4, 00:24:55.323 
"num_base_bdevs_discovered": 1, 00:24:55.323 "num_base_bdevs_operational": 4, 00:24:55.323 "base_bdevs_list": [ 00:24:55.323 { 00:24:55.323 "name": "BaseBdev1", 00:24:55.323 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79", 00:24:55.323 "is_configured": true, 00:24:55.323 "data_offset": 0, 00:24:55.323 "data_size": 65536 00:24:55.323 }, 00:24:55.323 { 00:24:55.323 "name": "BaseBdev2", 00:24:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.323 "is_configured": false, 00:24:55.323 "data_offset": 0, 00:24:55.323 "data_size": 0 00:24:55.323 }, 00:24:55.323 { 00:24:55.323 "name": "BaseBdev3", 00:24:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.323 "is_configured": false, 00:24:55.323 "data_offset": 0, 00:24:55.323 "data_size": 0 00:24:55.323 }, 00:24:55.323 { 00:24:55.323 "name": "BaseBdev4", 00:24:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.323 "is_configured": false, 00:24:55.323 "data_offset": 0, 00:24:55.323 "data_size": 0 00:24:55.323 } 00:24:55.323 ] 00:24:55.323 }' 00:24:55.323 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.323 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.889 07:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:55.889 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.889 07:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.889 [2024-11-20 07:22:20.004749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.889 BaseBdev2 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:55.889 07:22:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.889 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.889 [ 00:24:55.889 { 00:24:55.889 "name": "BaseBdev2", 00:24:55.889 "aliases": [ 00:24:55.889 "141db11e-23c9-41fd-826b-7ccfb82565f4" 00:24:55.889 ], 00:24:55.889 "product_name": "Malloc disk", 00:24:55.889 "block_size": 512, 00:24:55.889 "num_blocks": 65536, 00:24:55.889 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4", 00:24:55.889 "assigned_rate_limits": { 00:24:55.889 "rw_ios_per_sec": 0, 00:24:55.889 "rw_mbytes_per_sec": 0, 00:24:55.889 "r_mbytes_per_sec": 0, 00:24:55.889 "w_mbytes_per_sec": 0 00:24:55.890 }, 00:24:55.890 "claimed": true, 00:24:55.890 "claim_type": "exclusive_write", 00:24:55.890 "zoned": false, 00:24:55.890 "supported_io_types": { 
00:24:55.890 "read": true, 00:24:55.890 "write": true, 00:24:55.890 "unmap": true, 00:24:55.890 "flush": true, 00:24:55.890 "reset": true, 00:24:55.890 "nvme_admin": false, 00:24:55.890 "nvme_io": false, 00:24:55.890 "nvme_io_md": false, 00:24:55.890 "write_zeroes": true, 00:24:55.890 "zcopy": true, 00:24:55.890 "get_zone_info": false, 00:24:55.890 "zone_management": false, 00:24:55.890 "zone_append": false, 00:24:55.890 "compare": false, 00:24:55.890 "compare_and_write": false, 00:24:55.890 "abort": true, 00:24:55.890 "seek_hole": false, 00:24:55.890 "seek_data": false, 00:24:55.890 "copy": true, 00:24:55.890 "nvme_iov_md": false 00:24:55.890 }, 00:24:55.890 "memory_domains": [ 00:24:55.890 { 00:24:55.890 "dma_device_id": "system", 00:24:55.890 "dma_device_type": 1 00:24:55.890 }, 00:24:55.890 { 00:24:55.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.890 "dma_device_type": 2 00:24:55.890 } 00:24:55.890 ], 00:24:55.890 "driver_specific": {} 00:24:55.890 } 00:24:55.890 ] 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.890 "name": "Existed_Raid", 00:24:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.890 "strip_size_kb": 64, 00:24:55.890 "state": "configuring", 00:24:55.890 "raid_level": "concat", 00:24:55.890 "superblock": false, 00:24:55.890 "num_base_bdevs": 4, 00:24:55.890 "num_base_bdevs_discovered": 2, 00:24:55.890 "num_base_bdevs_operational": 4, 00:24:55.890 "base_bdevs_list": [ 00:24:55.890 { 00:24:55.890 "name": "BaseBdev1", 00:24:55.890 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79", 00:24:55.890 "is_configured": true, 00:24:55.890 "data_offset": 0, 00:24:55.890 "data_size": 65536 00:24:55.890 }, 00:24:55.890 { 00:24:55.890 "name": "BaseBdev2", 00:24:55.890 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4", 00:24:55.890 
"is_configured": true, 00:24:55.890 "data_offset": 0, 00:24:55.890 "data_size": 65536 00:24:55.890 }, 00:24:55.890 { 00:24:55.890 "name": "BaseBdev3", 00:24:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.890 "is_configured": false, 00:24:55.890 "data_offset": 0, 00:24:55.890 "data_size": 0 00:24:55.890 }, 00:24:55.890 { 00:24:55.890 "name": "BaseBdev4", 00:24:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.890 "is_configured": false, 00:24:55.890 "data_offset": 0, 00:24:55.890 "data_size": 0 00:24:55.890 } 00:24:55.890 ] 00:24:55.890 }' 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.890 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 BaseBdev3 00:24:56.458 [2024-11-20 07:22:20.614912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 [ 00:24:56.458 { 00:24:56.458 "name": "BaseBdev3", 00:24:56.458 "aliases": [ 00:24:56.458 "d098431d-4828-45be-9487-079bb54644e6" 00:24:56.458 ], 00:24:56.458 "product_name": "Malloc disk", 00:24:56.458 "block_size": 512, 00:24:56.458 "num_blocks": 65536, 00:24:56.458 "uuid": "d098431d-4828-45be-9487-079bb54644e6", 00:24:56.458 "assigned_rate_limits": { 00:24:56.458 "rw_ios_per_sec": 0, 00:24:56.458 "rw_mbytes_per_sec": 0, 00:24:56.458 "r_mbytes_per_sec": 0, 00:24:56.458 "w_mbytes_per_sec": 0 00:24:56.458 }, 00:24:56.458 "claimed": true, 00:24:56.458 "claim_type": "exclusive_write", 00:24:56.458 "zoned": false, 00:24:56.458 "supported_io_types": { 00:24:56.458 "read": true, 00:24:56.458 "write": true, 00:24:56.458 "unmap": true, 00:24:56.458 "flush": true, 00:24:56.458 "reset": true, 00:24:56.458 "nvme_admin": false, 00:24:56.458 "nvme_io": false, 00:24:56.458 "nvme_io_md": false, 00:24:56.458 "write_zeroes": true, 00:24:56.458 "zcopy": true, 00:24:56.458 "get_zone_info": false, 00:24:56.458 "zone_management": false, 00:24:56.458 "zone_append": false, 00:24:56.458 "compare": false, 00:24:56.458 "compare_and_write": false, 
00:24:56.458 "abort": true, 00:24:56.458 "seek_hole": false, 00:24:56.458 "seek_data": false, 00:24:56.458 "copy": true, 00:24:56.458 "nvme_iov_md": false 00:24:56.458 }, 00:24:56.458 "memory_domains": [ 00:24:56.458 { 00:24:56.458 "dma_device_id": "system", 00:24:56.458 "dma_device_type": 1 00:24:56.458 }, 00:24:56.458 { 00:24:56.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.458 "dma_device_type": 2 00:24:56.458 } 00:24:56.458 ], 00:24:56.458 "driver_specific": {} 00:24:56.458 } 00:24:56.458 ] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.458 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.458 "name": "Existed_Raid", 00:24:56.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.458 "strip_size_kb": 64, 00:24:56.458 "state": "configuring", 00:24:56.458 "raid_level": "concat", 00:24:56.458 "superblock": false, 00:24:56.458 "num_base_bdevs": 4, 00:24:56.458 "num_base_bdevs_discovered": 3, 00:24:56.458 "num_base_bdevs_operational": 4, 00:24:56.458 "base_bdevs_list": [ 00:24:56.458 { 00:24:56.458 "name": "BaseBdev1", 00:24:56.459 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79", 00:24:56.459 "is_configured": true, 00:24:56.459 "data_offset": 0, 00:24:56.459 "data_size": 65536 00:24:56.459 }, 00:24:56.459 { 00:24:56.459 "name": "BaseBdev2", 00:24:56.459 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4", 00:24:56.459 "is_configured": true, 00:24:56.459 "data_offset": 0, 00:24:56.459 "data_size": 65536 00:24:56.459 }, 00:24:56.459 { 00:24:56.459 "name": "BaseBdev3", 00:24:56.459 "uuid": "d098431d-4828-45be-9487-079bb54644e6", 00:24:56.459 "is_configured": true, 00:24:56.459 "data_offset": 0, 00:24:56.459 "data_size": 65536 00:24:56.459 }, 00:24:56.459 { 00:24:56.459 "name": "BaseBdev4", 00:24:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.459 "is_configured": false, 
00:24:56.459 "data_offset": 0,
00:24:56.459 "data_size": 0
00:24:56.459 }
00:24:56.459 ]
00:24:56.459 }'
00:24:56.459 07:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:56.459 07:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.030 [2024-11-20 07:22:21.226967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:57.030 [2024-11-20 07:22:21.227175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:24:57.030 [2024-11-20 07:22:21.227209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:24:57.030 [2024-11-20 07:22:21.227574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:24:57.030 [2024-11-20 07:22:21.227837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:24:57.030 [2024-11-20 07:22:21.227862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:24:57.030 BaseBdev4
00:24:57.030 [2024-11-20 07:22:21.228173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.030 [
00:24:57.030 {
00:24:57.030 "name": "BaseBdev4",
00:24:57.030 "aliases": [
00:24:57.030 "896391a4-dff0-4563-89f9-5945cecec690"
00:24:57.030 ],
00:24:57.030 "product_name": "Malloc disk",
00:24:57.030 "block_size": 512,
00:24:57.030 "num_blocks": 65536,
00:24:57.030 "uuid": "896391a4-dff0-4563-89f9-5945cecec690",
00:24:57.030 "assigned_rate_limits": {
00:24:57.030 "rw_ios_per_sec": 0,
00:24:57.030 "rw_mbytes_per_sec": 0,
00:24:57.030 "r_mbytes_per_sec": 0,
00:24:57.030 "w_mbytes_per_sec": 0
00:24:57.030 },
00:24:57.030 "claimed": true,
00:24:57.030 "claim_type": "exclusive_write",
00:24:57.030 "zoned": false,
00:24:57.030 "supported_io_types": {
00:24:57.030 "read": true,
00:24:57.030 "write": true,
00:24:57.030 "unmap": true,
00:24:57.030 "flush": true,
00:24:57.030 "reset": true,
00:24:57.030 "nvme_admin": false,
00:24:57.030 "nvme_io": false,
00:24:57.030 "nvme_io_md": false,
00:24:57.030 "write_zeroes": true,
00:24:57.030 "zcopy": true,
00:24:57.030 "get_zone_info": false,
00:24:57.030 "zone_management": false,
00:24:57.030 "zone_append": false,
00:24:57.030 "compare": false,
00:24:57.030 "compare_and_write": false,
00:24:57.030 "abort": true,
00:24:57.030 "seek_hole": false,
00:24:57.030 "seek_data": false,
00:24:57.030 "copy": true,
00:24:57.030 "nvme_iov_md": false
00:24:57.030 },
00:24:57.030 "memory_domains": [
00:24:57.030 {
00:24:57.030 "dma_device_id": "system",
00:24:57.030 "dma_device_type": 1
00:24:57.030 },
00:24:57.030 {
00:24:57.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:57.030 "dma_device_type": 2
00:24:57.030 }
00:24:57.030 ],
00:24:57.030 "driver_specific": {}
00:24:57.030 }
00:24:57.030 ]
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.030 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.290 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:57.290 "name": "Existed_Raid",
00:24:57.290 "uuid": "cb3328aa-d75a-4567-bd7f-00ae90b4f0b0",
00:24:57.290 "strip_size_kb": 64,
00:24:57.290 "state": "online",
00:24:57.290 "raid_level": "concat",
00:24:57.290 "superblock": false,
00:24:57.290 "num_base_bdevs": 4,
00:24:57.290 "num_base_bdevs_discovered": 4,
00:24:57.290 "num_base_bdevs_operational": 4,
00:24:57.290 "base_bdevs_list": [
00:24:57.290 {
00:24:57.290 "name": "BaseBdev1",
00:24:57.290 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79",
00:24:57.290 "is_configured": true,
00:24:57.290 "data_offset": 0,
00:24:57.290 "data_size": 65536
00:24:57.290 },
00:24:57.290 {
00:24:57.290 "name": "BaseBdev2",
00:24:57.290 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4",
00:24:57.290 "is_configured": true,
00:24:57.290 "data_offset": 0,
00:24:57.290 "data_size": 65536
00:24:57.290 },
00:24:57.290 {
00:24:57.290 "name": "BaseBdev3",
00:24:57.290 "uuid": "d098431d-4828-45be-9487-079bb54644e6",
00:24:57.290 "is_configured": true,
00:24:57.290 "data_offset": 0,
00:24:57.290 "data_size": 65536
00:24:57.290 },
00:24:57.290 {
00:24:57.290 "name": "BaseBdev4",
00:24:57.290 "uuid": "896391a4-dff0-4563-89f9-5945cecec690",
00:24:57.290 "is_configured": true,
00:24:57.290 "data_offset": 0,
00:24:57.290 "data_size": 65536
00:24:57.290 }
00:24:57.290 ]
00:24:57.290 }'
00:24:57.290 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:57.290 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.548 [2024-11-20 07:22:21.815694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:57.548 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:57.807 "name": "Existed_Raid",
00:24:57.807 "aliases": [
00:24:57.807 "cb3328aa-d75a-4567-bd7f-00ae90b4f0b0"
00:24:57.807 ],
00:24:57.807 "product_name": "Raid Volume",
00:24:57.807 "block_size": 512,
00:24:57.807 "num_blocks": 262144,
00:24:57.807 "uuid": "cb3328aa-d75a-4567-bd7f-00ae90b4f0b0",
00:24:57.807 "assigned_rate_limits": {
00:24:57.807 "rw_ios_per_sec": 0,
00:24:57.807 "rw_mbytes_per_sec": 0,
00:24:57.807 "r_mbytes_per_sec": 0,
00:24:57.807 "w_mbytes_per_sec": 0
00:24:57.807 },
00:24:57.807 "claimed": false,
00:24:57.807 "zoned": false,
00:24:57.807 "supported_io_types": {
00:24:57.807 "read": true,
00:24:57.807 "write": true,
00:24:57.807 "unmap": true,
00:24:57.807 "flush": true,
00:24:57.807 "reset": true,
00:24:57.807 "nvme_admin": false,
00:24:57.807 "nvme_io": false,
00:24:57.807 "nvme_io_md": false,
00:24:57.807 "write_zeroes": true,
00:24:57.807 "zcopy": false,
00:24:57.807 "get_zone_info": false,
00:24:57.807 "zone_management": false,
00:24:57.807 "zone_append": false,
00:24:57.807 "compare": false,
00:24:57.807 "compare_and_write": false,
00:24:57.807 "abort": false,
00:24:57.807 "seek_hole": false,
00:24:57.807 "seek_data": false,
00:24:57.807 "copy": false,
00:24:57.807 "nvme_iov_md": false
00:24:57.807 },
00:24:57.807 "memory_domains": [
00:24:57.807 {
00:24:57.807 "dma_device_id": "system",
00:24:57.807 "dma_device_type": 1
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:57.807 "dma_device_type": 2
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "system",
00:24:57.807 "dma_device_type": 1
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:57.807 "dma_device_type": 2
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "system",
00:24:57.807 "dma_device_type": 1
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:57.807 "dma_device_type": 2
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "system",
00:24:57.807 "dma_device_type": 1
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:57.807 "dma_device_type": 2
00:24:57.807 }
00:24:57.807 ],
00:24:57.807 "driver_specific": {
00:24:57.807 "raid": {
00:24:57.807 "uuid": "cb3328aa-d75a-4567-bd7f-00ae90b4f0b0",
00:24:57.807 "strip_size_kb": 64,
00:24:57.807 "state": "online",
00:24:57.807 "raid_level": "concat",
00:24:57.807 "superblock": false,
00:24:57.807 "num_base_bdevs": 4,
00:24:57.807 "num_base_bdevs_discovered": 4,
00:24:57.807 "num_base_bdevs_operational": 4,
00:24:57.807 "base_bdevs_list": [
00:24:57.807 {
00:24:57.807 "name": "BaseBdev1",
00:24:57.807 "uuid": "96f6d591-0274-4947-8ad2-cee49fe2ec79",
00:24:57.807 "is_configured": true,
00:24:57.807 "data_offset": 0,
00:24:57.807 "data_size": 65536
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "name": "BaseBdev2",
00:24:57.807 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4",
00:24:57.807 "is_configured": true,
00:24:57.807 "data_offset": 0,
00:24:57.807 "data_size": 65536
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "name": "BaseBdev3",
00:24:57.807 "uuid": "d098431d-4828-45be-9487-079bb54644e6",
00:24:57.807 "is_configured": true,
00:24:57.807 "data_offset": 0,
00:24:57.807 "data_size": 65536
00:24:57.807 },
00:24:57.807 {
00:24:57.807 "name": "BaseBdev4",
00:24:57.807 "uuid": "896391a4-dff0-4563-89f9-5945cecec690",
00:24:57.807 "is_configured": true,
00:24:57.807 "data_offset": 0,
00:24:57.807 "data_size": 65536
00:24:57.807 }
00:24:57.807 ]
00:24:57.807 }
00:24:57.807 }
00:24:57.807 }'
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:24:57.807 BaseBdev2
00:24:57.807 BaseBdev3
00:24:57.807 BaseBdev4'
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.807 07:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:57.807 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:57.808 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:24:57.808 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.808 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:57.808 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.067 [2024-11-20 07:22:22.195404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:58.067 [2024-11-20 07:22:22.195594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:58.067 [2024-11-20 07:22:22.195846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:58.067 "name": "Existed_Raid",
00:24:58.067 "uuid": "cb3328aa-d75a-4567-bd7f-00ae90b4f0b0",
00:24:58.067 "strip_size_kb": 64,
00:24:58.067 "state": "offline",
00:24:58.067 "raid_level": "concat",
00:24:58.067 "superblock": false,
00:24:58.067 "num_base_bdevs": 4,
00:24:58.067 "num_base_bdevs_discovered": 3,
00:24:58.067 "num_base_bdevs_operational": 3,
00:24:58.067 "base_bdevs_list": [
00:24:58.067 {
00:24:58.067 "name": null,
00:24:58.067 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:58.067 "is_configured": false,
00:24:58.067 "data_offset": 0,
00:24:58.067 "data_size": 65536
00:24:58.067 },
00:24:58.067 {
00:24:58.067 "name": "BaseBdev2",
00:24:58.067 "uuid": "141db11e-23c9-41fd-826b-7ccfb82565f4",
00:24:58.067 "is_configured": true,
00:24:58.067 "data_offset": 0,
00:24:58.067 "data_size": 65536
00:24:58.067 },
00:24:58.067 {
00:24:58.067 "name": "BaseBdev3",
00:24:58.067 "uuid": "d098431d-4828-45be-9487-079bb54644e6",
00:24:58.067 "is_configured": true,
00:24:58.067 "data_offset": 0,
00:24:58.067 "data_size": 65536
00:24:58.067 },
00:24:58.067 {
00:24:58.067 "name": "BaseBdev4",
00:24:58.067 "uuid": "896391a4-dff0-4563-89f9-5945cecec690",
00:24:58.067 "is_configured": true,
00:24:58.067 "data_offset": 0,
00:24:58.067 "data_size": 65536
00:24:58.067 }
00:24:58.067 ]
00:24:58.067 }'
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:58.067 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.634 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.634 [2024-11-20 07:22:22.864033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.893 07:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.893 [2024-11-20 07:22:23.013046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.893 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:58.893 [2024-11-20 07:22:23.158047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:24:58.893 [2024-11-20 07:22:23.158273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.152 BaseBdev2
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.152 [
00:24:59.152 {
00:24:59.152 "name": "BaseBdev2",
00:24:59.152 "aliases": [
00:24:59.152 "4e100480-bbdd-4cb4-a772-c2478b2365a3"
00:24:59.152 ],
00:24:59.152 "product_name": "Malloc disk",
00:24:59.152 "block_size": 512,
00:24:59.152 "num_blocks": 65536,
00:24:59.152 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3",
00:24:59.152 "assigned_rate_limits": {
00:24:59.152 "rw_ios_per_sec": 0,
00:24:59.152 "rw_mbytes_per_sec": 0,
00:24:59.152 "r_mbytes_per_sec": 0,
00:24:59.152 "w_mbytes_per_sec": 0
00:24:59.152 },
00:24:59.152 "claimed": false,
00:24:59.152 "zoned": false,
00:24:59.152 "supported_io_types": {
00:24:59.152 "read": true,
00:24:59.152 "write": true,
00:24:59.152 "unmap": true,
00:24:59.152 "flush": true,
00:24:59.152 "reset": true,
00:24:59.152 "nvme_admin": false,
00:24:59.152 "nvme_io": false,
00:24:59.152 "nvme_io_md": false,
00:24:59.152 "write_zeroes": true,
00:24:59.152 "zcopy": true,
00:24:59.152 "get_zone_info": false,
00:24:59.152 "zone_management": false,
00:24:59.152 "zone_append": false,
00:24:59.152 "compare": false,
00:24:59.152 "compare_and_write": false,
00:24:59.152 "abort": true,
00:24:59.152 "seek_hole": false,
00:24:59.152 "seek_data": false,
00:24:59.152 "copy": true,
00:24:59.152 "nvme_iov_md": false
00:24:59.152 },
00:24:59.152 "memory_domains": [
00:24:59.152 {
00:24:59.152 "dma_device_id": "system",
00:24:59.152 "dma_device_type": 1
00:24:59.152 },
00:24:59.152 {
00:24:59.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:59.152 "dma_device_type": 2
00:24:59.152 }
00:24:59.152 ],
00:24:59.152 "driver_specific": {}
00:24:59.152 }
00:24:59.152 ]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.152 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.153 BaseBdev3
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.153 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:59.153 [
00:24:59.153 {
00:24:59.153 "name": "BaseBdev3",
00:24:59.153 "aliases": [
00:24:59.153 "604e1131-21d6-47df-8ede-743b470c9ba9"
00:24:59.153 ],
00:24:59.153 "product_name": "Malloc disk",
00:24:59.153 "block_size": 512,
00:24:59.153 "num_blocks": 65536,
00:24:59.153 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9",
00:24:59.153 "assigned_rate_limits": {
00:24:59.153 "rw_ios_per_sec": 0,
00:24:59.153 "rw_mbytes_per_sec": 0,
00:24:59.153 "r_mbytes_per_sec": 0,
00:24:59.153 "w_mbytes_per_sec": 0
00:24:59.153 },
00:24:59.153 "claimed": false,
00:24:59.153 "zoned": false,
00:24:59.153 "supported_io_types": {
00:24:59.153 "read": true,
00:24:59.153 "write": true,
00:24:59.153 "unmap": true,
00:24:59.153 "flush": true,
00:24:59.153 "reset": true,
00:24:59.153 "nvme_admin": false,
00:24:59.153 "nvme_io": false,
00:24:59.153 "nvme_io_md": false,
00:24:59.153 "write_zeroes": true,
00:24:59.153 "zcopy": true,
00:24:59.153 "get_zone_info": false,
00:24:59.153 "zone_management": false,
00:24:59.153 "zone_append": false,
00:24:59.153 "compare": false,
00:24:59.153 "compare_and_write": false,
00:24:59.412 "abort": true,
00:24:59.412 "seek_hole": false,
00:24:59.412 "seek_data": false,
00:24:59.412 "copy": true, 00:24:59.412 "nvme_iov_md": false 00:24:59.412 }, 00:24:59.412 "memory_domains": [ 00:24:59.412 { 00:24:59.412 "dma_device_id": "system", 00:24:59.412 "dma_device_type": 1 00:24:59.412 }, 00:24:59.412 { 00:24:59.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.412 "dma_device_type": 2 00:24:59.412 } 00:24:59.412 ], 00:24:59.412 "driver_specific": {} 00:24:59.412 } 00:24:59.412 ] 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.412 BaseBdev4 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:59.412 
07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.412 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.412 [ 00:24:59.412 { 00:24:59.412 "name": "BaseBdev4", 00:24:59.412 "aliases": [ 00:24:59.412 "f1b90df8-b050-4318-be94-05227d70dc59" 00:24:59.412 ], 00:24:59.412 "product_name": "Malloc disk", 00:24:59.412 "block_size": 512, 00:24:59.412 "num_blocks": 65536, 00:24:59.412 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:24:59.412 "assigned_rate_limits": { 00:24:59.412 "rw_ios_per_sec": 0, 00:24:59.412 "rw_mbytes_per_sec": 0, 00:24:59.412 "r_mbytes_per_sec": 0, 00:24:59.412 "w_mbytes_per_sec": 0 00:24:59.412 }, 00:24:59.412 "claimed": false, 00:24:59.412 "zoned": false, 00:24:59.412 "supported_io_types": { 00:24:59.412 "read": true, 00:24:59.412 "write": true, 00:24:59.412 "unmap": true, 00:24:59.412 "flush": true, 00:24:59.412 "reset": true, 00:24:59.412 "nvme_admin": false, 00:24:59.412 "nvme_io": false, 00:24:59.412 "nvme_io_md": false, 00:24:59.412 "write_zeroes": true, 00:24:59.412 "zcopy": true, 00:24:59.412 "get_zone_info": false, 00:24:59.412 "zone_management": false, 00:24:59.412 "zone_append": false, 00:24:59.412 "compare": false, 00:24:59.412 "compare_and_write": false, 00:24:59.412 "abort": true, 00:24:59.412 "seek_hole": false, 00:24:59.412 "seek_data": false, 00:24:59.412 
"copy": true, 00:24:59.412 "nvme_iov_md": false 00:24:59.412 }, 00:24:59.412 "memory_domains": [ 00:24:59.412 { 00:24:59.412 "dma_device_id": "system", 00:24:59.412 "dma_device_type": 1 00:24:59.412 }, 00:24:59.412 { 00:24:59.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.412 "dma_device_type": 2 00:24:59.412 } 00:24:59.412 ], 00:24:59.412 "driver_specific": {} 00:24:59.412 } 00:24:59.413 ] 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.413 [2024-11-20 07:22:23.522251] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:59.413 [2024-11-20 07:22:23.522444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:59.413 [2024-11-20 07:22:23.522620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:59.413 [2024-11-20 07:22:23.525176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:59.413 [2024-11-20 07:22:23.525397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.413 07:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.413 "name": "Existed_Raid", 00:24:59.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.413 "strip_size_kb": 64, 00:24:59.413 "state": "configuring", 00:24:59.413 
"raid_level": "concat", 00:24:59.413 "superblock": false, 00:24:59.413 "num_base_bdevs": 4, 00:24:59.413 "num_base_bdevs_discovered": 3, 00:24:59.413 "num_base_bdevs_operational": 4, 00:24:59.413 "base_bdevs_list": [ 00:24:59.413 { 00:24:59.413 "name": "BaseBdev1", 00:24:59.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.413 "is_configured": false, 00:24:59.413 "data_offset": 0, 00:24:59.413 "data_size": 0 00:24:59.413 }, 00:24:59.413 { 00:24:59.413 "name": "BaseBdev2", 00:24:59.413 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:24:59.413 "is_configured": true, 00:24:59.413 "data_offset": 0, 00:24:59.413 "data_size": 65536 00:24:59.413 }, 00:24:59.413 { 00:24:59.413 "name": "BaseBdev3", 00:24:59.413 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:24:59.413 "is_configured": true, 00:24:59.413 "data_offset": 0, 00:24:59.413 "data_size": 65536 00:24:59.413 }, 00:24:59.413 { 00:24:59.413 "name": "BaseBdev4", 00:24:59.413 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:24:59.413 "is_configured": true, 00:24:59.413 "data_offset": 0, 00:24:59.413 "data_size": 65536 00:24:59.413 } 00:24:59.413 ] 00:24:59.413 }' 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.413 07:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.981 [2024-11-20 07:22:24.038413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.981 "name": "Existed_Raid", 00:24:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.981 "strip_size_kb": 64, 00:24:59.981 "state": "configuring", 00:24:59.981 "raid_level": "concat", 00:24:59.981 "superblock": false, 
00:24:59.981 "num_base_bdevs": 4, 00:24:59.981 "num_base_bdevs_discovered": 2, 00:24:59.981 "num_base_bdevs_operational": 4, 00:24:59.981 "base_bdevs_list": [ 00:24:59.981 { 00:24:59.981 "name": "BaseBdev1", 00:24:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.981 "is_configured": false, 00:24:59.981 "data_offset": 0, 00:24:59.981 "data_size": 0 00:24:59.981 }, 00:24:59.981 { 00:24:59.981 "name": null, 00:24:59.981 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:24:59.981 "is_configured": false, 00:24:59.981 "data_offset": 0, 00:24:59.981 "data_size": 65536 00:24:59.981 }, 00:24:59.981 { 00:24:59.981 "name": "BaseBdev3", 00:24:59.981 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:24:59.981 "is_configured": true, 00:24:59.981 "data_offset": 0, 00:24:59.981 "data_size": 65536 00:24:59.981 }, 00:24:59.981 { 00:24:59.981 "name": "BaseBdev4", 00:24:59.981 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:24:59.981 "is_configured": true, 00:24:59.981 "data_offset": 0, 00:24:59.981 "data_size": 65536 00:24:59.981 } 00:24:59.981 ] 00:24:59.981 }' 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.981 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:00.548 07:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.548 [2024-11-20 07:22:24.631854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:00.548 BaseBdev1 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:00.548 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:00.549 [ 00:25:00.549 { 00:25:00.549 "name": "BaseBdev1", 00:25:00.549 "aliases": [ 00:25:00.549 "cfa17dd8-28f9-4897-a8cf-53dd88266786" 00:25:00.549 ], 00:25:00.549 "product_name": "Malloc disk", 00:25:00.549 "block_size": 512, 00:25:00.549 "num_blocks": 65536, 00:25:00.549 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:00.549 "assigned_rate_limits": { 00:25:00.549 "rw_ios_per_sec": 0, 00:25:00.549 "rw_mbytes_per_sec": 0, 00:25:00.549 "r_mbytes_per_sec": 0, 00:25:00.549 "w_mbytes_per_sec": 0 00:25:00.549 }, 00:25:00.549 "claimed": true, 00:25:00.549 "claim_type": "exclusive_write", 00:25:00.549 "zoned": false, 00:25:00.549 "supported_io_types": { 00:25:00.549 "read": true, 00:25:00.549 "write": true, 00:25:00.549 "unmap": true, 00:25:00.549 "flush": true, 00:25:00.549 "reset": true, 00:25:00.549 "nvme_admin": false, 00:25:00.549 "nvme_io": false, 00:25:00.549 "nvme_io_md": false, 00:25:00.549 "write_zeroes": true, 00:25:00.549 "zcopy": true, 00:25:00.549 "get_zone_info": false, 00:25:00.549 "zone_management": false, 00:25:00.549 "zone_append": false, 00:25:00.549 "compare": false, 00:25:00.549 "compare_and_write": false, 00:25:00.549 "abort": true, 00:25:00.549 "seek_hole": false, 00:25:00.549 "seek_data": false, 00:25:00.549 "copy": true, 00:25:00.549 "nvme_iov_md": false 00:25:00.549 }, 00:25:00.549 "memory_domains": [ 00:25:00.549 { 00:25:00.549 "dma_device_id": "system", 00:25:00.549 "dma_device_type": 1 00:25:00.549 }, 00:25:00.549 { 00:25:00.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.549 "dma_device_type": 2 00:25:00.549 } 00:25:00.549 ], 00:25:00.549 "driver_specific": {} 00:25:00.549 } 00:25:00.549 ] 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:00.549 "name": "Existed_Raid", 00:25:00.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.549 "strip_size_kb": 64, 00:25:00.549 "state": "configuring", 00:25:00.549 "raid_level": "concat", 00:25:00.549 "superblock": false, 
00:25:00.549 "num_base_bdevs": 4, 00:25:00.549 "num_base_bdevs_discovered": 3, 00:25:00.549 "num_base_bdevs_operational": 4, 00:25:00.549 "base_bdevs_list": [ 00:25:00.549 { 00:25:00.549 "name": "BaseBdev1", 00:25:00.549 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:00.549 "is_configured": true, 00:25:00.549 "data_offset": 0, 00:25:00.549 "data_size": 65536 00:25:00.549 }, 00:25:00.549 { 00:25:00.549 "name": null, 00:25:00.549 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:00.549 "is_configured": false, 00:25:00.549 "data_offset": 0, 00:25:00.549 "data_size": 65536 00:25:00.549 }, 00:25:00.549 { 00:25:00.549 "name": "BaseBdev3", 00:25:00.549 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:00.549 "is_configured": true, 00:25:00.549 "data_offset": 0, 00:25:00.549 "data_size": 65536 00:25:00.549 }, 00:25:00.549 { 00:25:00.549 "name": "BaseBdev4", 00:25:00.549 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:00.549 "is_configured": true, 00:25:00.549 "data_offset": 0, 00:25:00.549 "data_size": 65536 00:25:00.549 } 00:25:00.549 ] 00:25:00.549 }' 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:00.549 07:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:01.120 07:22:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.120 [2024-11-20 07:22:25.256172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.120 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.121 "name": "Existed_Raid", 00:25:01.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.121 "strip_size_kb": 64, 00:25:01.121 "state": "configuring", 00:25:01.121 "raid_level": "concat", 00:25:01.121 "superblock": false, 00:25:01.121 "num_base_bdevs": 4, 00:25:01.121 "num_base_bdevs_discovered": 2, 00:25:01.121 "num_base_bdevs_operational": 4, 00:25:01.121 "base_bdevs_list": [ 00:25:01.121 { 00:25:01.121 "name": "BaseBdev1", 00:25:01.121 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:01.121 "is_configured": true, 00:25:01.121 "data_offset": 0, 00:25:01.121 "data_size": 65536 00:25:01.121 }, 00:25:01.121 { 00:25:01.121 "name": null, 00:25:01.121 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:01.121 "is_configured": false, 00:25:01.121 "data_offset": 0, 00:25:01.121 "data_size": 65536 00:25:01.121 }, 00:25:01.121 { 00:25:01.121 "name": null, 00:25:01.121 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:01.121 "is_configured": false, 00:25:01.121 "data_offset": 0, 00:25:01.121 "data_size": 65536 00:25:01.121 }, 00:25:01.121 { 00:25:01.121 "name": "BaseBdev4", 00:25:01.121 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:01.121 "is_configured": true, 00:25:01.121 "data_offset": 0, 00:25:01.121 "data_size": 65536 00:25:01.121 } 00:25:01.121 ] 00:25:01.121 }' 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.121 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 [2024-11-20 07:22:25.856341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.694 "name": "Existed_Raid", 00:25:01.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.694 "strip_size_kb": 64, 00:25:01.694 "state": "configuring", 00:25:01.694 "raid_level": "concat", 00:25:01.694 "superblock": false, 00:25:01.694 "num_base_bdevs": 4, 00:25:01.694 "num_base_bdevs_discovered": 3, 00:25:01.694 "num_base_bdevs_operational": 4, 00:25:01.694 "base_bdevs_list": [ 00:25:01.694 { 00:25:01.694 "name": "BaseBdev1", 00:25:01.694 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:01.694 "is_configured": true, 00:25:01.694 "data_offset": 0, 00:25:01.694 "data_size": 65536 00:25:01.694 }, 00:25:01.694 { 00:25:01.694 "name": null, 00:25:01.694 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:01.694 "is_configured": false, 00:25:01.694 "data_offset": 0, 00:25:01.694 "data_size": 65536 00:25:01.694 }, 00:25:01.694 { 00:25:01.694 "name": "BaseBdev3", 00:25:01.694 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:01.694 "is_configured": 
true, 00:25:01.694 "data_offset": 0, 00:25:01.694 "data_size": 65536 00:25:01.694 }, 00:25:01.694 { 00:25:01.694 "name": "BaseBdev4", 00:25:01.694 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:01.694 "is_configured": true, 00:25:01.694 "data_offset": 0, 00:25:01.694 "data_size": 65536 00:25:01.694 } 00:25:01.694 ] 00:25:01.694 }' 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.694 07:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.262 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.263 [2024-11-20 07:22:26.416616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.263 "name": "Existed_Raid", 00:25:02.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.263 "strip_size_kb": 64, 00:25:02.263 "state": "configuring", 00:25:02.263 "raid_level": "concat", 00:25:02.263 "superblock": false, 00:25:02.263 "num_base_bdevs": 4, 00:25:02.263 "num_base_bdevs_discovered": 2, 00:25:02.263 "num_base_bdevs_operational": 4, 00:25:02.263 
"base_bdevs_list": [ 00:25:02.263 { 00:25:02.263 "name": null, 00:25:02.263 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:02.263 "is_configured": false, 00:25:02.263 "data_offset": 0, 00:25:02.263 "data_size": 65536 00:25:02.263 }, 00:25:02.263 { 00:25:02.263 "name": null, 00:25:02.263 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:02.263 "is_configured": false, 00:25:02.263 "data_offset": 0, 00:25:02.263 "data_size": 65536 00:25:02.263 }, 00:25:02.263 { 00:25:02.263 "name": "BaseBdev3", 00:25:02.263 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:02.263 "is_configured": true, 00:25:02.263 "data_offset": 0, 00:25:02.263 "data_size": 65536 00:25:02.263 }, 00:25:02.263 { 00:25:02.263 "name": "BaseBdev4", 00:25:02.263 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:02.263 "is_configured": true, 00:25:02.263 "data_offset": 0, 00:25:02.263 "data_size": 65536 00:25:02.263 } 00:25:02.263 ] 00:25:02.263 }' 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.263 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.832 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:02.832 07:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.832 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.832 07:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:02.832 07:22:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.832 [2024-11-20 07:22:27.035228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.832 07:22:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.832 "name": "Existed_Raid", 00:25:02.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.832 "strip_size_kb": 64, 00:25:02.832 "state": "configuring", 00:25:02.832 "raid_level": "concat", 00:25:02.832 "superblock": false, 00:25:02.832 "num_base_bdevs": 4, 00:25:02.832 "num_base_bdevs_discovered": 3, 00:25:02.832 "num_base_bdevs_operational": 4, 00:25:02.832 "base_bdevs_list": [ 00:25:02.832 { 00:25:02.832 "name": null, 00:25:02.832 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:02.832 "is_configured": false, 00:25:02.832 "data_offset": 0, 00:25:02.832 "data_size": 65536 00:25:02.832 }, 00:25:02.832 { 00:25:02.832 "name": "BaseBdev2", 00:25:02.832 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:02.832 "is_configured": true, 00:25:02.832 "data_offset": 0, 00:25:02.832 "data_size": 65536 00:25:02.832 }, 00:25:02.832 { 00:25:02.832 "name": "BaseBdev3", 00:25:02.832 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:02.832 "is_configured": true, 00:25:02.832 "data_offset": 0, 00:25:02.832 "data_size": 65536 00:25:02.832 }, 00:25:02.832 { 00:25:02.832 "name": "BaseBdev4", 00:25:02.832 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:02.832 "is_configured": true, 00:25:02.832 "data_offset": 0, 00:25:02.832 "data_size": 65536 00:25:02.832 } 00:25:02.832 ] 00:25:02.832 }' 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.832 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:03.400 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cfa17dd8-28f9-4897-a8cf-53dd88266786 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.401 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.663 [2024-11-20 07:22:27.717140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:03.663 [2024-11-20 07:22:27.717457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:03.663 [2024-11-20 07:22:27.717486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:03.663 [2024-11-20 07:22:27.717913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:03.663 [2024-11-20 07:22:27.718180] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:03.663 [2024-11-20 07:22:27.718201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:03.663 [2024-11-20 07:22:27.718511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.663 NewBaseBdev 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.663 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.663 [ 00:25:03.663 { 
00:25:03.663 "name": "NewBaseBdev", 00:25:03.663 "aliases": [ 00:25:03.663 "cfa17dd8-28f9-4897-a8cf-53dd88266786" 00:25:03.663 ], 00:25:03.663 "product_name": "Malloc disk", 00:25:03.663 "block_size": 512, 00:25:03.663 "num_blocks": 65536, 00:25:03.663 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:03.663 "assigned_rate_limits": { 00:25:03.663 "rw_ios_per_sec": 0, 00:25:03.663 "rw_mbytes_per_sec": 0, 00:25:03.663 "r_mbytes_per_sec": 0, 00:25:03.663 "w_mbytes_per_sec": 0 00:25:03.663 }, 00:25:03.663 "claimed": true, 00:25:03.663 "claim_type": "exclusive_write", 00:25:03.663 "zoned": false, 00:25:03.663 "supported_io_types": { 00:25:03.663 "read": true, 00:25:03.664 "write": true, 00:25:03.664 "unmap": true, 00:25:03.664 "flush": true, 00:25:03.664 "reset": true, 00:25:03.664 "nvme_admin": false, 00:25:03.664 "nvme_io": false, 00:25:03.664 "nvme_io_md": false, 00:25:03.664 "write_zeroes": true, 00:25:03.664 "zcopy": true, 00:25:03.664 "get_zone_info": false, 00:25:03.664 "zone_management": false, 00:25:03.664 "zone_append": false, 00:25:03.664 "compare": false, 00:25:03.664 "compare_and_write": false, 00:25:03.664 "abort": true, 00:25:03.664 "seek_hole": false, 00:25:03.664 "seek_data": false, 00:25:03.664 "copy": true, 00:25:03.664 "nvme_iov_md": false 00:25:03.664 }, 00:25:03.664 "memory_domains": [ 00:25:03.664 { 00:25:03.664 "dma_device_id": "system", 00:25:03.664 "dma_device_type": 1 00:25:03.664 }, 00:25:03.664 { 00:25:03.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.664 "dma_device_type": 2 00:25:03.664 } 00:25:03.664 ], 00:25:03.664 "driver_specific": {} 00:25:03.664 } 00:25:03.664 ] 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:03.664 
07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.664 "name": "Existed_Raid", 00:25:03.664 "uuid": "f4c6ad11-b0d7-47ef-b247-d3e89633c5ce", 00:25:03.664 "strip_size_kb": 64, 00:25:03.664 "state": "online", 00:25:03.664 "raid_level": "concat", 00:25:03.664 "superblock": false, 00:25:03.664 "num_base_bdevs": 4, 00:25:03.664 "num_base_bdevs_discovered": 4, 00:25:03.664 
"num_base_bdevs_operational": 4, 00:25:03.664 "base_bdevs_list": [ 00:25:03.664 { 00:25:03.664 "name": "NewBaseBdev", 00:25:03.664 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:03.664 "is_configured": true, 00:25:03.664 "data_offset": 0, 00:25:03.664 "data_size": 65536 00:25:03.664 }, 00:25:03.664 { 00:25:03.664 "name": "BaseBdev2", 00:25:03.664 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:03.664 "is_configured": true, 00:25:03.664 "data_offset": 0, 00:25:03.664 "data_size": 65536 00:25:03.664 }, 00:25:03.664 { 00:25:03.664 "name": "BaseBdev3", 00:25:03.664 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:03.664 "is_configured": true, 00:25:03.664 "data_offset": 0, 00:25:03.664 "data_size": 65536 00:25:03.664 }, 00:25:03.664 { 00:25:03.664 "name": "BaseBdev4", 00:25:03.664 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:03.664 "is_configured": true, 00:25:03.664 "data_offset": 0, 00:25:03.664 "data_size": 65536 00:25:03.664 } 00:25:03.664 ] 00:25:03.664 }' 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.664 07:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:04.231 
07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.231 [2024-11-20 07:22:28.289905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.231 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.231 "name": "Existed_Raid", 00:25:04.231 "aliases": [ 00:25:04.231 "f4c6ad11-b0d7-47ef-b247-d3e89633c5ce" 00:25:04.231 ], 00:25:04.231 "product_name": "Raid Volume", 00:25:04.231 "block_size": 512, 00:25:04.231 "num_blocks": 262144, 00:25:04.231 "uuid": "f4c6ad11-b0d7-47ef-b247-d3e89633c5ce", 00:25:04.231 "assigned_rate_limits": { 00:25:04.231 "rw_ios_per_sec": 0, 00:25:04.231 "rw_mbytes_per_sec": 0, 00:25:04.231 "r_mbytes_per_sec": 0, 00:25:04.231 "w_mbytes_per_sec": 0 00:25:04.231 }, 00:25:04.231 "claimed": false, 00:25:04.231 "zoned": false, 00:25:04.231 "supported_io_types": { 00:25:04.231 "read": true, 00:25:04.231 "write": true, 00:25:04.231 "unmap": true, 00:25:04.231 "flush": true, 00:25:04.231 "reset": true, 00:25:04.231 "nvme_admin": false, 00:25:04.231 "nvme_io": false, 00:25:04.231 "nvme_io_md": false, 00:25:04.231 "write_zeroes": true, 00:25:04.231 "zcopy": false, 00:25:04.231 "get_zone_info": false, 00:25:04.231 "zone_management": false, 00:25:04.231 "zone_append": false, 00:25:04.231 "compare": false, 00:25:04.231 "compare_and_write": false, 00:25:04.231 "abort": false, 00:25:04.231 "seek_hole": false, 00:25:04.231 "seek_data": false, 00:25:04.231 "copy": false, 00:25:04.231 "nvme_iov_md": false 00:25:04.231 }, 00:25:04.231 "memory_domains": [ 00:25:04.231 { 00:25:04.231 "dma_device_id": 
"system", 00:25:04.232 "dma_device_type": 1 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.232 "dma_device_type": 2 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "system", 00:25:04.232 "dma_device_type": 1 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.232 "dma_device_type": 2 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "system", 00:25:04.232 "dma_device_type": 1 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.232 "dma_device_type": 2 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "system", 00:25:04.232 "dma_device_type": 1 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.232 "dma_device_type": 2 00:25:04.232 } 00:25:04.232 ], 00:25:04.232 "driver_specific": { 00:25:04.232 "raid": { 00:25:04.232 "uuid": "f4c6ad11-b0d7-47ef-b247-d3e89633c5ce", 00:25:04.232 "strip_size_kb": 64, 00:25:04.232 "state": "online", 00:25:04.232 "raid_level": "concat", 00:25:04.232 "superblock": false, 00:25:04.232 "num_base_bdevs": 4, 00:25:04.232 "num_base_bdevs_discovered": 4, 00:25:04.232 "num_base_bdevs_operational": 4, 00:25:04.232 "base_bdevs_list": [ 00:25:04.232 { 00:25:04.232 "name": "NewBaseBdev", 00:25:04.232 "uuid": "cfa17dd8-28f9-4897-a8cf-53dd88266786", 00:25:04.232 "is_configured": true, 00:25:04.232 "data_offset": 0, 00:25:04.232 "data_size": 65536 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "name": "BaseBdev2", 00:25:04.232 "uuid": "4e100480-bbdd-4cb4-a772-c2478b2365a3", 00:25:04.232 "is_configured": true, 00:25:04.232 "data_offset": 0, 00:25:04.232 "data_size": 65536 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "name": "BaseBdev3", 00:25:04.232 "uuid": "604e1131-21d6-47df-8ede-743b470c9ba9", 00:25:04.232 "is_configured": true, 00:25:04.232 "data_offset": 0, 00:25:04.232 "data_size": 65536 00:25:04.232 }, 00:25:04.232 { 00:25:04.232 "name": 
"BaseBdev4", 00:25:04.232 "uuid": "f1b90df8-b050-4318-be94-05227d70dc59", 00:25:04.232 "is_configured": true, 00:25:04.232 "data_offset": 0, 00:25:04.232 "data_size": 65536 00:25:04.232 } 00:25:04.232 ] 00:25:04.232 } 00:25:04.232 } 00:25:04.232 }' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:04.232 BaseBdev2 00:25:04.232 BaseBdev3 00:25:04.232 BaseBdev4' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.232 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.491 [2024-11-20 07:22:28.657530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:04.491 [2024-11-20 07:22:28.657756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.491 [2024-11-20 07:22:28.657971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.491 [2024-11-20 07:22:28.658175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.491 [2024-11-20 07:22:28.658295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71600 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71600 
']' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71600 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71600 00:25:04.491 killing process with pid 71600 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71600' 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71600 00:25:04.491 [2024-11-20 07:22:28.700166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.491 07:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71600 00:25:05.059 [2024-11-20 07:22:29.040884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:05.999 00:25:05.999 real 0m12.903s 00:25:05.999 user 0m21.540s 00:25:05.999 sys 0m1.760s 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 ************************************ 00:25:05.999 END TEST raid_state_function_test 00:25:05.999 ************************************ 00:25:05.999 07:22:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:25:05.999 
07:22:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:05.999 07:22:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.999 07:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 ************************************ 00:25:05.999 START TEST raid_state_function_test_sb 00:25:05.999 ************************************ 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:05.999 Process raid pid: 72282 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72282 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72282' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72282 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72282 ']' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.999 07:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 [2024-11-20 07:22:30.195101] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:05.999 [2024-11-20 07:22:30.195552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.264 [2024-11-20 07:22:30.364557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.264 [2024-11-20 07:22:30.498438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.522 [2024-11-20 07:22:30.698478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.522 [2024-11-20 07:22:30.698521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.090 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.091 [2024-11-20 07:22:31.123002] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.091 [2024-11-20 07:22:31.123229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.091 [2024-11-20 07:22:31.123353] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.091 [2024-11-20 07:22:31.123413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.091 [2024-11-20 07:22:31.123512] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:25:07.091 [2024-11-20 07:22:31.123682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.091 [2024-11-20 07:22:31.123806] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:07.091 [2024-11-20 07:22:31.123868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.091 07:22:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.091 "name": "Existed_Raid", 00:25:07.091 "uuid": "09834e75-5e45-4b46-8ec9-d867a25ef854", 00:25:07.091 "strip_size_kb": 64, 00:25:07.091 "state": "configuring", 00:25:07.091 "raid_level": "concat", 00:25:07.091 "superblock": true, 00:25:07.091 "num_base_bdevs": 4, 00:25:07.091 "num_base_bdevs_discovered": 0, 00:25:07.091 "num_base_bdevs_operational": 4, 00:25:07.091 "base_bdevs_list": [ 00:25:07.091 { 00:25:07.091 "name": "BaseBdev1", 00:25:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.091 "is_configured": false, 00:25:07.091 "data_offset": 0, 00:25:07.091 "data_size": 0 00:25:07.091 }, 00:25:07.091 { 00:25:07.091 "name": "BaseBdev2", 00:25:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.091 "is_configured": false, 00:25:07.091 "data_offset": 0, 00:25:07.091 "data_size": 0 00:25:07.091 }, 00:25:07.091 { 00:25:07.091 "name": "BaseBdev3", 00:25:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.091 "is_configured": false, 00:25:07.091 "data_offset": 0, 00:25:07.091 "data_size": 0 00:25:07.091 }, 00:25:07.091 { 00:25:07.091 "name": "BaseBdev4", 00:25:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.091 "is_configured": false, 00:25:07.091 "data_offset": 0, 00:25:07.091 "data_size": 0 00:25:07.091 } 00:25:07.091 ] 00:25:07.091 }' 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.091 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 07:22:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 [2024-11-20 07:22:31.679047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.659 [2024-11-20 07:22:31.679215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 [2024-11-20 07:22:31.687045] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.659 [2024-11-20 07:22:31.687270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.659 [2024-11-20 07:22:31.687390] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.659 [2024-11-20 07:22:31.687519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.659 [2024-11-20 07:22:31.687646] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:07.659 [2024-11-20 07:22:31.687786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.659 [2024-11-20 07:22:31.687809] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:25:07.659 [2024-11-20 07:22:31.687826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 [2024-11-20 07:22:31.732461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.659 BaseBdev1 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 [ 00:25:07.659 { 00:25:07.659 "name": "BaseBdev1", 00:25:07.659 "aliases": [ 00:25:07.659 "87b39570-fc79-4050-9132-3d08dfd23d9d" 00:25:07.659 ], 00:25:07.659 "product_name": "Malloc disk", 00:25:07.659 "block_size": 512, 00:25:07.659 "num_blocks": 65536, 00:25:07.659 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:07.659 "assigned_rate_limits": { 00:25:07.659 "rw_ios_per_sec": 0, 00:25:07.659 "rw_mbytes_per_sec": 0, 00:25:07.659 "r_mbytes_per_sec": 0, 00:25:07.659 "w_mbytes_per_sec": 0 00:25:07.659 }, 00:25:07.659 "claimed": true, 00:25:07.659 "claim_type": "exclusive_write", 00:25:07.659 "zoned": false, 00:25:07.659 "supported_io_types": { 00:25:07.659 "read": true, 00:25:07.659 "write": true, 00:25:07.659 "unmap": true, 00:25:07.659 "flush": true, 00:25:07.659 "reset": true, 00:25:07.659 "nvme_admin": false, 00:25:07.659 "nvme_io": false, 00:25:07.659 "nvme_io_md": false, 00:25:07.659 "write_zeroes": true, 00:25:07.659 "zcopy": true, 00:25:07.659 "get_zone_info": false, 00:25:07.659 "zone_management": false, 00:25:07.659 "zone_append": false, 00:25:07.659 "compare": false, 00:25:07.659 "compare_and_write": false, 00:25:07.659 "abort": true, 00:25:07.659 "seek_hole": false, 00:25:07.659 "seek_data": false, 00:25:07.659 "copy": true, 00:25:07.659 "nvme_iov_md": false 00:25:07.659 }, 00:25:07.659 "memory_domains": [ 00:25:07.659 { 00:25:07.659 "dma_device_id": "system", 00:25:07.659 "dma_device_type": 1 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.659 "dma_device_type": 2 00:25:07.659 } 
00:25:07.659 ], 00:25:07.659 "driver_specific": {} 00:25:07.659 } 00:25:07.659 ] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.659 07:22:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.659 "name": "Existed_Raid", 00:25:07.659 "uuid": "8e9478d3-01e9-4dcd-be22-c9a6150b3acf", 00:25:07.659 "strip_size_kb": 64, 00:25:07.659 "state": "configuring", 00:25:07.659 "raid_level": "concat", 00:25:07.659 "superblock": true, 00:25:07.659 "num_base_bdevs": 4, 00:25:07.659 "num_base_bdevs_discovered": 1, 00:25:07.659 "num_base_bdevs_operational": 4, 00:25:07.659 "base_bdevs_list": [ 00:25:07.659 { 00:25:07.659 "name": "BaseBdev1", 00:25:07.659 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:07.659 "is_configured": true, 00:25:07.659 "data_offset": 2048, 00:25:07.659 "data_size": 63488 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev2", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev3", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev4", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 } 00:25:07.659 ] 00:25:07.659 }' 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.659 07:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.228 07:22:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.228 [2024-11-20 07:22:32.308698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:08.228 [2024-11-20 07:22:32.308873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.228 [2024-11-20 07:22:32.316773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.228 [2024-11-20 07:22:32.319394] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:08.228 [2024-11-20 07:22:32.319615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:08.228 [2024-11-20 07:22:32.319727] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:08.228 [2024-11-20 07:22:32.319886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:08.228 [2024-11-20 07:22:32.319989] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:08.228 [2024-11-20 07:22:32.320103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.228 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:08.228 "name": "Existed_Raid", 00:25:08.228 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:08.228 "strip_size_kb": 64, 00:25:08.228 "state": "configuring", 00:25:08.229 "raid_level": "concat", 00:25:08.229 "superblock": true, 00:25:08.229 "num_base_bdevs": 4, 00:25:08.229 "num_base_bdevs_discovered": 1, 00:25:08.229 "num_base_bdevs_operational": 4, 00:25:08.229 "base_bdevs_list": [ 00:25:08.229 { 00:25:08.229 "name": "BaseBdev1", 00:25:08.229 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:08.229 "is_configured": true, 00:25:08.229 "data_offset": 2048, 00:25:08.229 "data_size": 63488 00:25:08.229 }, 00:25:08.229 { 00:25:08.229 "name": "BaseBdev2", 00:25:08.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.229 "is_configured": false, 00:25:08.229 "data_offset": 0, 00:25:08.229 "data_size": 0 00:25:08.229 }, 00:25:08.229 { 00:25:08.229 "name": "BaseBdev3", 00:25:08.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.229 "is_configured": false, 00:25:08.229 "data_offset": 0, 00:25:08.229 "data_size": 0 00:25:08.229 }, 00:25:08.229 { 00:25:08.229 "name": "BaseBdev4", 00:25:08.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.229 "is_configured": false, 00:25:08.229 "data_offset": 0, 00:25:08.229 "data_size": 0 00:25:08.229 } 00:25:08.229 ] 00:25:08.229 }' 00:25:08.229 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.229 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.796 [2024-11-20 07:22:32.892188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:25:08.796 BaseBdev2 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.796 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.796 [ 00:25:08.796 { 00:25:08.796 "name": "BaseBdev2", 00:25:08.797 "aliases": [ 00:25:08.797 "9a3d36b2-6479-4551-b611-2f2b523d923d" 00:25:08.797 ], 00:25:08.797 "product_name": "Malloc disk", 00:25:08.797 "block_size": 512, 00:25:08.797 "num_blocks": 65536, 00:25:08.797 "uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 
00:25:08.797 "assigned_rate_limits": { 00:25:08.797 "rw_ios_per_sec": 0, 00:25:08.797 "rw_mbytes_per_sec": 0, 00:25:08.797 "r_mbytes_per_sec": 0, 00:25:08.797 "w_mbytes_per_sec": 0 00:25:08.797 }, 00:25:08.797 "claimed": true, 00:25:08.797 "claim_type": "exclusive_write", 00:25:08.797 "zoned": false, 00:25:08.797 "supported_io_types": { 00:25:08.797 "read": true, 00:25:08.797 "write": true, 00:25:08.797 "unmap": true, 00:25:08.797 "flush": true, 00:25:08.797 "reset": true, 00:25:08.797 "nvme_admin": false, 00:25:08.797 "nvme_io": false, 00:25:08.797 "nvme_io_md": false, 00:25:08.797 "write_zeroes": true, 00:25:08.797 "zcopy": true, 00:25:08.797 "get_zone_info": false, 00:25:08.797 "zone_management": false, 00:25:08.797 "zone_append": false, 00:25:08.797 "compare": false, 00:25:08.797 "compare_and_write": false, 00:25:08.797 "abort": true, 00:25:08.797 "seek_hole": false, 00:25:08.797 "seek_data": false, 00:25:08.797 "copy": true, 00:25:08.797 "nvme_iov_md": false 00:25:08.797 }, 00:25:08.797 "memory_domains": [ 00:25:08.797 { 00:25:08.797 "dma_device_id": "system", 00:25:08.797 "dma_device_type": 1 00:25:08.797 }, 00:25:08.797 { 00:25:08.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.797 "dma_device_type": 2 00:25:08.797 } 00:25:08.797 ], 00:25:08.797 "driver_specific": {} 00:25:08.797 } 00:25:08.797 ] 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.797 "name": "Existed_Raid", 00:25:08.797 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:08.797 "strip_size_kb": 64, 00:25:08.797 "state": "configuring", 00:25:08.797 "raid_level": "concat", 00:25:08.797 "superblock": true, 00:25:08.797 "num_base_bdevs": 4, 00:25:08.797 "num_base_bdevs_discovered": 2, 00:25:08.797 
"num_base_bdevs_operational": 4, 00:25:08.797 "base_bdevs_list": [ 00:25:08.797 { 00:25:08.797 "name": "BaseBdev1", 00:25:08.797 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:08.797 "is_configured": true, 00:25:08.797 "data_offset": 2048, 00:25:08.797 "data_size": 63488 00:25:08.797 }, 00:25:08.797 { 00:25:08.797 "name": "BaseBdev2", 00:25:08.797 "uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 00:25:08.797 "is_configured": true, 00:25:08.797 "data_offset": 2048, 00:25:08.797 "data_size": 63488 00:25:08.797 }, 00:25:08.797 { 00:25:08.797 "name": "BaseBdev3", 00:25:08.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.797 "is_configured": false, 00:25:08.797 "data_offset": 0, 00:25:08.797 "data_size": 0 00:25:08.797 }, 00:25:08.797 { 00:25:08.797 "name": "BaseBdev4", 00:25:08.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.797 "is_configured": false, 00:25:08.797 "data_offset": 0, 00:25:08.797 "data_size": 0 00:25:08.797 } 00:25:08.797 ] 00:25:08.797 }' 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.797 07:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.365 [2024-11-20 07:22:33.485117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.365 BaseBdev3 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.365 [ 00:25:09.365 { 00:25:09.365 "name": "BaseBdev3", 00:25:09.365 "aliases": [ 00:25:09.365 "dc0a1690-760a-44e2-a925-55094a9d2c13" 00:25:09.365 ], 00:25:09.365 "product_name": "Malloc disk", 00:25:09.365 "block_size": 512, 00:25:09.365 "num_blocks": 65536, 00:25:09.365 "uuid": "dc0a1690-760a-44e2-a925-55094a9d2c13", 00:25:09.365 "assigned_rate_limits": { 00:25:09.365 "rw_ios_per_sec": 0, 00:25:09.365 "rw_mbytes_per_sec": 0, 00:25:09.365 "r_mbytes_per_sec": 0, 00:25:09.365 "w_mbytes_per_sec": 0 00:25:09.365 }, 00:25:09.365 "claimed": true, 00:25:09.365 "claim_type": "exclusive_write", 00:25:09.365 "zoned": false, 00:25:09.365 "supported_io_types": { 
00:25:09.365 "read": true, 00:25:09.365 "write": true, 00:25:09.365 "unmap": true, 00:25:09.365 "flush": true, 00:25:09.365 "reset": true, 00:25:09.365 "nvme_admin": false, 00:25:09.365 "nvme_io": false, 00:25:09.365 "nvme_io_md": false, 00:25:09.365 "write_zeroes": true, 00:25:09.365 "zcopy": true, 00:25:09.365 "get_zone_info": false, 00:25:09.365 "zone_management": false, 00:25:09.365 "zone_append": false, 00:25:09.365 "compare": false, 00:25:09.365 "compare_and_write": false, 00:25:09.365 "abort": true, 00:25:09.365 "seek_hole": false, 00:25:09.365 "seek_data": false, 00:25:09.365 "copy": true, 00:25:09.365 "nvme_iov_md": false 00:25:09.365 }, 00:25:09.365 "memory_domains": [ 00:25:09.365 { 00:25:09.365 "dma_device_id": "system", 00:25:09.365 "dma_device_type": 1 00:25:09.365 }, 00:25:09.365 { 00:25:09.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.365 "dma_device_type": 2 00:25:09.365 } 00:25:09.365 ], 00:25:09.365 "driver_specific": {} 00:25:09.365 } 00:25:09.365 ] 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:09.365 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.366 "name": "Existed_Raid", 00:25:09.366 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:09.366 "strip_size_kb": 64, 00:25:09.366 "state": "configuring", 00:25:09.366 "raid_level": "concat", 00:25:09.366 "superblock": true, 00:25:09.366 "num_base_bdevs": 4, 00:25:09.366 "num_base_bdevs_discovered": 3, 00:25:09.366 "num_base_bdevs_operational": 4, 00:25:09.366 "base_bdevs_list": [ 00:25:09.366 { 00:25:09.366 "name": "BaseBdev1", 00:25:09.366 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:09.366 "is_configured": true, 00:25:09.366 "data_offset": 2048, 00:25:09.366 "data_size": 63488 00:25:09.366 }, 00:25:09.366 { 00:25:09.366 "name": "BaseBdev2", 00:25:09.366 
"uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 00:25:09.366 "is_configured": true, 00:25:09.366 "data_offset": 2048, 00:25:09.366 "data_size": 63488 00:25:09.366 }, 00:25:09.366 { 00:25:09.366 "name": "BaseBdev3", 00:25:09.366 "uuid": "dc0a1690-760a-44e2-a925-55094a9d2c13", 00:25:09.366 "is_configured": true, 00:25:09.366 "data_offset": 2048, 00:25:09.366 "data_size": 63488 00:25:09.366 }, 00:25:09.366 { 00:25:09.366 "name": "BaseBdev4", 00:25:09.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.366 "is_configured": false, 00:25:09.366 "data_offset": 0, 00:25:09.366 "data_size": 0 00:25:09.366 } 00:25:09.366 ] 00:25:09.366 }' 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.366 07:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 [2024-11-20 07:22:34.085818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:09.935 [2024-11-20 07:22:34.086355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:09.935 [2024-11-20 07:22:34.086381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:09.935 [2024-11-20 07:22:34.086757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:09.935 BaseBdev4 00:25:09.935 [2024-11-20 07:22:34.086983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:09.935 [2024-11-20 07:22:34.087006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:25:09.935 [2024-11-20 07:22:34.087184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 [ 00:25:09.935 { 00:25:09.935 "name": "BaseBdev4", 00:25:09.935 "aliases": [ 00:25:09.935 "9fde9b01-a289-48c3-b27a-ee07557579a4" 00:25:09.935 ], 00:25:09.935 "product_name": "Malloc disk", 00:25:09.935 "block_size": 512, 00:25:09.935 
"num_blocks": 65536, 00:25:09.935 "uuid": "9fde9b01-a289-48c3-b27a-ee07557579a4", 00:25:09.935 "assigned_rate_limits": { 00:25:09.935 "rw_ios_per_sec": 0, 00:25:09.935 "rw_mbytes_per_sec": 0, 00:25:09.935 "r_mbytes_per_sec": 0, 00:25:09.935 "w_mbytes_per_sec": 0 00:25:09.935 }, 00:25:09.935 "claimed": true, 00:25:09.935 "claim_type": "exclusive_write", 00:25:09.935 "zoned": false, 00:25:09.935 "supported_io_types": { 00:25:09.935 "read": true, 00:25:09.935 "write": true, 00:25:09.935 "unmap": true, 00:25:09.935 "flush": true, 00:25:09.935 "reset": true, 00:25:09.935 "nvme_admin": false, 00:25:09.935 "nvme_io": false, 00:25:09.935 "nvme_io_md": false, 00:25:09.935 "write_zeroes": true, 00:25:09.935 "zcopy": true, 00:25:09.935 "get_zone_info": false, 00:25:09.935 "zone_management": false, 00:25:09.935 "zone_append": false, 00:25:09.935 "compare": false, 00:25:09.935 "compare_and_write": false, 00:25:09.935 "abort": true, 00:25:09.935 "seek_hole": false, 00:25:09.935 "seek_data": false, 00:25:09.935 "copy": true, 00:25:09.935 "nvme_iov_md": false 00:25:09.935 }, 00:25:09.935 "memory_domains": [ 00:25:09.935 { 00:25:09.935 "dma_device_id": "system", 00:25:09.935 "dma_device_type": 1 00:25:09.935 }, 00:25:09.935 { 00:25:09.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.935 "dma_device_type": 2 00:25:09.935 } 00:25:09.935 ], 00:25:09.935 "driver_specific": {} 00:25:09.935 } 00:25:09.935 ] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.935 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.935 "name": "Existed_Raid", 00:25:09.935 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:09.935 "strip_size_kb": 64, 00:25:09.935 "state": "online", 00:25:09.935 "raid_level": "concat", 00:25:09.935 "superblock": true, 00:25:09.935 "num_base_bdevs": 4, 
00:25:09.935 "num_base_bdevs_discovered": 4, 00:25:09.935 "num_base_bdevs_operational": 4, 00:25:09.935 "base_bdevs_list": [ 00:25:09.935 { 00:25:09.935 "name": "BaseBdev1", 00:25:09.935 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:09.935 "is_configured": true, 00:25:09.935 "data_offset": 2048, 00:25:09.935 "data_size": 63488 00:25:09.935 }, 00:25:09.935 { 00:25:09.935 "name": "BaseBdev2", 00:25:09.935 "uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 00:25:09.935 "is_configured": true, 00:25:09.935 "data_offset": 2048, 00:25:09.935 "data_size": 63488 00:25:09.935 }, 00:25:09.935 { 00:25:09.935 "name": "BaseBdev3", 00:25:09.935 "uuid": "dc0a1690-760a-44e2-a925-55094a9d2c13", 00:25:09.935 "is_configured": true, 00:25:09.935 "data_offset": 2048, 00:25:09.935 "data_size": 63488 00:25:09.936 }, 00:25:09.936 { 00:25:09.936 "name": "BaseBdev4", 00:25:09.936 "uuid": "9fde9b01-a289-48c3-b27a-ee07557579a4", 00:25:09.936 "is_configured": true, 00:25:09.936 "data_offset": 2048, 00:25:09.936 "data_size": 63488 00:25:09.936 } 00:25:09.936 ] 00:25:09.936 }' 00:25:09.936 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.936 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:10.503 
07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.503 [2024-11-20 07:22:34.646595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:10.503 "name": "Existed_Raid", 00:25:10.503 "aliases": [ 00:25:10.503 "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5" 00:25:10.503 ], 00:25:10.503 "product_name": "Raid Volume", 00:25:10.503 "block_size": 512, 00:25:10.503 "num_blocks": 253952, 00:25:10.503 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:10.503 "assigned_rate_limits": { 00:25:10.503 "rw_ios_per_sec": 0, 00:25:10.503 "rw_mbytes_per_sec": 0, 00:25:10.503 "r_mbytes_per_sec": 0, 00:25:10.503 "w_mbytes_per_sec": 0 00:25:10.503 }, 00:25:10.503 "claimed": false, 00:25:10.503 "zoned": false, 00:25:10.503 "supported_io_types": { 00:25:10.503 "read": true, 00:25:10.503 "write": true, 00:25:10.503 "unmap": true, 00:25:10.503 "flush": true, 00:25:10.503 "reset": true, 00:25:10.503 "nvme_admin": false, 00:25:10.503 "nvme_io": false, 00:25:10.503 "nvme_io_md": false, 00:25:10.503 "write_zeroes": true, 00:25:10.503 "zcopy": false, 00:25:10.503 "get_zone_info": false, 00:25:10.503 "zone_management": false, 00:25:10.503 "zone_append": false, 00:25:10.503 "compare": false, 00:25:10.503 "compare_and_write": false, 00:25:10.503 "abort": false, 00:25:10.503 "seek_hole": false, 00:25:10.503 "seek_data": false, 00:25:10.503 "copy": false, 00:25:10.503 
"nvme_iov_md": false 00:25:10.503 }, 00:25:10.503 "memory_domains": [ 00:25:10.503 { 00:25:10.503 "dma_device_id": "system", 00:25:10.503 "dma_device_type": 1 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.503 "dma_device_type": 2 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "system", 00:25:10.503 "dma_device_type": 1 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.503 "dma_device_type": 2 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "system", 00:25:10.503 "dma_device_type": 1 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.503 "dma_device_type": 2 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "system", 00:25:10.503 "dma_device_type": 1 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.503 "dma_device_type": 2 00:25:10.503 } 00:25:10.503 ], 00:25:10.503 "driver_specific": { 00:25:10.503 "raid": { 00:25:10.503 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:10.503 "strip_size_kb": 64, 00:25:10.503 "state": "online", 00:25:10.503 "raid_level": "concat", 00:25:10.503 "superblock": true, 00:25:10.503 "num_base_bdevs": 4, 00:25:10.503 "num_base_bdevs_discovered": 4, 00:25:10.503 "num_base_bdevs_operational": 4, 00:25:10.503 "base_bdevs_list": [ 00:25:10.503 { 00:25:10.503 "name": "BaseBdev1", 00:25:10.503 "uuid": "87b39570-fc79-4050-9132-3d08dfd23d9d", 00:25:10.503 "is_configured": true, 00:25:10.503 "data_offset": 2048, 00:25:10.503 "data_size": 63488 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "name": "BaseBdev2", 00:25:10.503 "uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 00:25:10.503 "is_configured": true, 00:25:10.503 "data_offset": 2048, 00:25:10.503 "data_size": 63488 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "name": "BaseBdev3", 00:25:10.503 "uuid": "dc0a1690-760a-44e2-a925-55094a9d2c13", 00:25:10.503 "is_configured": true, 
00:25:10.503 "data_offset": 2048, 00:25:10.503 "data_size": 63488 00:25:10.503 }, 00:25:10.503 { 00:25:10.503 "name": "BaseBdev4", 00:25:10.503 "uuid": "9fde9b01-a289-48c3-b27a-ee07557579a4", 00:25:10.503 "is_configured": true, 00:25:10.503 "data_offset": 2048, 00:25:10.503 "data_size": 63488 00:25:10.503 } 00:25:10.503 ] 00:25:10.503 } 00:25:10.503 } 00:25:10.503 }' 00:25:10.503 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:10.504 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:10.504 BaseBdev2 00:25:10.504 BaseBdev3 00:25:10.504 BaseBdev4' 00:25:10.504 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.762 07:22:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.762 07:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.762 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 [2024-11-20 07:22:35.038232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:10.762 [2024-11-20 07:22:35.038457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.762 [2024-11-20 07:22:35.038643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:11.020 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.021 "name": "Existed_Raid", 00:25:11.021 "uuid": "748c1bf6-fa2d-4794-a956-7f4a77bdf7b5", 00:25:11.021 "strip_size_kb": 64, 00:25:11.021 "state": "offline", 00:25:11.021 "raid_level": "concat", 00:25:11.021 "superblock": true, 00:25:11.021 "num_base_bdevs": 4, 00:25:11.021 "num_base_bdevs_discovered": 3, 00:25:11.021 "num_base_bdevs_operational": 3, 00:25:11.021 "base_bdevs_list": [ 00:25:11.021 { 00:25:11.021 "name": null, 00:25:11.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.021 "is_configured": false, 00:25:11.021 "data_offset": 0, 00:25:11.021 "data_size": 63488 00:25:11.021 }, 00:25:11.021 { 00:25:11.021 "name": "BaseBdev2", 00:25:11.021 "uuid": "9a3d36b2-6479-4551-b611-2f2b523d923d", 00:25:11.021 "is_configured": true, 00:25:11.021 "data_offset": 2048, 00:25:11.021 "data_size": 63488 00:25:11.021 }, 00:25:11.021 { 00:25:11.021 "name": "BaseBdev3", 00:25:11.021 "uuid": "dc0a1690-760a-44e2-a925-55094a9d2c13", 00:25:11.021 "is_configured": true, 00:25:11.021 "data_offset": 2048, 00:25:11.021 "data_size": 63488 00:25:11.021 }, 00:25:11.021 { 00:25:11.021 "name": "BaseBdev4", 00:25:11.021 "uuid": "9fde9b01-a289-48c3-b27a-ee07557579a4", 00:25:11.021 "is_configured": true, 00:25:11.021 "data_offset": 2048, 00:25:11.021 "data_size": 63488 00:25:11.021 } 00:25:11.021 ] 00:25:11.021 }' 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.021 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.586 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:11.586 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:11.586 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.586 
07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:11.586 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.586 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.587 [2024-11-20 07:22:35.737150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.587 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.845 [2024-11-20 07:22:35.887829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.845 07:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:11.845 07:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.845 [2024-11-20 07:22:36.031258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:11.845 [2024-11-20 07:22:36.031449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.845 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.104 BaseBdev2 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.104 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.104 [ 00:25:12.104 { 00:25:12.104 "name": "BaseBdev2", 00:25:12.104 "aliases": [ 00:25:12.104 
"d003aa07-4e3c-48f7-b4d6-9f4110c207e3" 00:25:12.104 ], 00:25:12.104 "product_name": "Malloc disk", 00:25:12.104 "block_size": 512, 00:25:12.104 "num_blocks": 65536, 00:25:12.104 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:12.104 "assigned_rate_limits": { 00:25:12.104 "rw_ios_per_sec": 0, 00:25:12.104 "rw_mbytes_per_sec": 0, 00:25:12.104 "r_mbytes_per_sec": 0, 00:25:12.104 "w_mbytes_per_sec": 0 00:25:12.104 }, 00:25:12.104 "claimed": false, 00:25:12.104 "zoned": false, 00:25:12.104 "supported_io_types": { 00:25:12.104 "read": true, 00:25:12.104 "write": true, 00:25:12.104 "unmap": true, 00:25:12.104 "flush": true, 00:25:12.104 "reset": true, 00:25:12.104 "nvme_admin": false, 00:25:12.104 "nvme_io": false, 00:25:12.104 "nvme_io_md": false, 00:25:12.104 "write_zeroes": true, 00:25:12.104 "zcopy": true, 00:25:12.104 "get_zone_info": false, 00:25:12.104 "zone_management": false, 00:25:12.104 "zone_append": false, 00:25:12.104 "compare": false, 00:25:12.104 "compare_and_write": false, 00:25:12.104 "abort": true, 00:25:12.104 "seek_hole": false, 00:25:12.104 "seek_data": false, 00:25:12.104 "copy": true, 00:25:12.104 "nvme_iov_md": false 00:25:12.104 }, 00:25:12.104 "memory_domains": [ 00:25:12.104 { 00:25:12.104 "dma_device_id": "system", 00:25:12.104 "dma_device_type": 1 00:25:12.104 }, 00:25:12.104 { 00:25:12.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.104 "dma_device_type": 2 00:25:12.105 } 00:25:12.105 ], 00:25:12.105 "driver_specific": {} 00:25:12.105 } 00:25:12.105 ] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:12.105 07:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.105 BaseBdev3 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.105 [ 00:25:12.105 { 
00:25:12.105 "name": "BaseBdev3", 00:25:12.105 "aliases": [ 00:25:12.105 "1673ea35-14ed-4282-9d7a-c4b07db7c90e" 00:25:12.105 ], 00:25:12.105 "product_name": "Malloc disk", 00:25:12.105 "block_size": 512, 00:25:12.105 "num_blocks": 65536, 00:25:12.105 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:12.105 "assigned_rate_limits": { 00:25:12.105 "rw_ios_per_sec": 0, 00:25:12.105 "rw_mbytes_per_sec": 0, 00:25:12.105 "r_mbytes_per_sec": 0, 00:25:12.105 "w_mbytes_per_sec": 0 00:25:12.105 }, 00:25:12.105 "claimed": false, 00:25:12.105 "zoned": false, 00:25:12.105 "supported_io_types": { 00:25:12.105 "read": true, 00:25:12.105 "write": true, 00:25:12.105 "unmap": true, 00:25:12.105 "flush": true, 00:25:12.105 "reset": true, 00:25:12.105 "nvme_admin": false, 00:25:12.105 "nvme_io": false, 00:25:12.105 "nvme_io_md": false, 00:25:12.105 "write_zeroes": true, 00:25:12.105 "zcopy": true, 00:25:12.105 "get_zone_info": false, 00:25:12.105 "zone_management": false, 00:25:12.105 "zone_append": false, 00:25:12.105 "compare": false, 00:25:12.105 "compare_and_write": false, 00:25:12.105 "abort": true, 00:25:12.105 "seek_hole": false, 00:25:12.105 "seek_data": false, 00:25:12.105 "copy": true, 00:25:12.105 "nvme_iov_md": false 00:25:12.105 }, 00:25:12.105 "memory_domains": [ 00:25:12.105 { 00:25:12.105 "dma_device_id": "system", 00:25:12.105 "dma_device_type": 1 00:25:12.105 }, 00:25:12.105 { 00:25:12.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.105 "dma_device_type": 2 00:25:12.105 } 00:25:12.105 ], 00:25:12.105 "driver_specific": {} 00:25:12.105 } 00:25:12.105 ] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.105 BaseBdev4 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.105 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:25:12.105 [ 00:25:12.105 { 00:25:12.105 "name": "BaseBdev4", 00:25:12.364 "aliases": [ 00:25:12.364 "1df3ecfd-fcda-4be4-8c87-70a71d32f737" 00:25:12.364 ], 00:25:12.364 "product_name": "Malloc disk", 00:25:12.364 "block_size": 512, 00:25:12.364 "num_blocks": 65536, 00:25:12.364 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:12.364 "assigned_rate_limits": { 00:25:12.364 "rw_ios_per_sec": 0, 00:25:12.364 "rw_mbytes_per_sec": 0, 00:25:12.364 "r_mbytes_per_sec": 0, 00:25:12.364 "w_mbytes_per_sec": 0 00:25:12.364 }, 00:25:12.364 "claimed": false, 00:25:12.364 "zoned": false, 00:25:12.364 "supported_io_types": { 00:25:12.364 "read": true, 00:25:12.364 "write": true, 00:25:12.364 "unmap": true, 00:25:12.364 "flush": true, 00:25:12.364 "reset": true, 00:25:12.364 "nvme_admin": false, 00:25:12.364 "nvme_io": false, 00:25:12.364 "nvme_io_md": false, 00:25:12.364 "write_zeroes": true, 00:25:12.364 "zcopy": true, 00:25:12.364 "get_zone_info": false, 00:25:12.364 "zone_management": false, 00:25:12.364 "zone_append": false, 00:25:12.364 "compare": false, 00:25:12.364 "compare_and_write": false, 00:25:12.364 "abort": true, 00:25:12.364 "seek_hole": false, 00:25:12.364 "seek_data": false, 00:25:12.364 "copy": true, 00:25:12.364 "nvme_iov_md": false 00:25:12.364 }, 00:25:12.364 "memory_domains": [ 00:25:12.364 { 00:25:12.364 "dma_device_id": "system", 00:25:12.364 "dma_device_type": 1 00:25:12.364 }, 00:25:12.364 { 00:25:12.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.364 "dma_device_type": 2 00:25:12.364 } 00:25:12.364 ], 00:25:12.364 "driver_specific": {} 00:25:12.364 } 00:25:12.364 ] 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:12.364 07:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.364 [2024-11-20 07:22:36.405434] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:12.364 [2024-11-20 07:22:36.405622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:12.364 [2024-11-20 07:22:36.405768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.364 [2024-11-20 07:22:36.408238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:12.364 [2024-11-20 07:22:36.408427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.364 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.364 "name": "Existed_Raid", 00:25:12.364 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:12.364 "strip_size_kb": 64, 00:25:12.364 "state": "configuring", 00:25:12.364 "raid_level": "concat", 00:25:12.364 "superblock": true, 00:25:12.364 "num_base_bdevs": 4, 00:25:12.364 "num_base_bdevs_discovered": 3, 00:25:12.364 "num_base_bdevs_operational": 4, 00:25:12.364 "base_bdevs_list": [ 00:25:12.364 { 00:25:12.364 "name": "BaseBdev1", 00:25:12.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.364 "is_configured": false, 00:25:12.364 "data_offset": 0, 00:25:12.364 "data_size": 0 00:25:12.364 }, 00:25:12.364 { 00:25:12.364 "name": "BaseBdev2", 00:25:12.364 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:12.364 "is_configured": true, 00:25:12.364 "data_offset": 2048, 00:25:12.365 "data_size": 63488 
00:25:12.365 }, 00:25:12.365 { 00:25:12.365 "name": "BaseBdev3", 00:25:12.365 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:12.365 "is_configured": true, 00:25:12.365 "data_offset": 2048, 00:25:12.365 "data_size": 63488 00:25:12.365 }, 00:25:12.365 { 00:25:12.365 "name": "BaseBdev4", 00:25:12.365 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:12.365 "is_configured": true, 00:25:12.365 "data_offset": 2048, 00:25:12.365 "data_size": 63488 00:25:12.365 } 00:25:12.365 ] 00:25:12.365 }' 00:25:12.365 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.365 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.930 [2024-11-20 07:22:36.937620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:12.930 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.931 "name": "Existed_Raid", 00:25:12.931 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:12.931 "strip_size_kb": 64, 00:25:12.931 "state": "configuring", 00:25:12.931 "raid_level": "concat", 00:25:12.931 "superblock": true, 00:25:12.931 "num_base_bdevs": 4, 00:25:12.931 "num_base_bdevs_discovered": 2, 00:25:12.931 "num_base_bdevs_operational": 4, 00:25:12.931 "base_bdevs_list": [ 00:25:12.931 { 00:25:12.931 "name": "BaseBdev1", 00:25:12.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.931 "is_configured": false, 00:25:12.931 "data_offset": 0, 00:25:12.931 "data_size": 0 00:25:12.931 }, 00:25:12.931 { 00:25:12.931 "name": null, 00:25:12.931 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:12.931 "is_configured": false, 00:25:12.931 "data_offset": 0, 00:25:12.931 "data_size": 63488 
00:25:12.931 }, 00:25:12.931 { 00:25:12.931 "name": "BaseBdev3", 00:25:12.931 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:12.931 "is_configured": true, 00:25:12.931 "data_offset": 2048, 00:25:12.931 "data_size": 63488 00:25:12.931 }, 00:25:12.931 { 00:25:12.931 "name": "BaseBdev4", 00:25:12.931 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:12.931 "is_configured": true, 00:25:12.931 "data_offset": 2048, 00:25:12.931 "data_size": 63488 00:25:12.931 } 00:25:12.931 ] 00:25:12.931 }' 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.931 07:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.189 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.189 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.189 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.189 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:13.189 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.448 [2024-11-20 07:22:37.535468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.448 BaseBdev1 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.448 [ 00:25:13.448 { 00:25:13.448 "name": "BaseBdev1", 00:25:13.448 "aliases": [ 00:25:13.448 "3ed409f7-133d-45c6-a3e8-2e09953e74a9" 00:25:13.448 ], 00:25:13.448 "product_name": "Malloc disk", 00:25:13.448 "block_size": 512, 00:25:13.448 "num_blocks": 65536, 00:25:13.448 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:13.448 "assigned_rate_limits": { 00:25:13.448 "rw_ios_per_sec": 0, 00:25:13.448 "rw_mbytes_per_sec": 0, 
00:25:13.448 "r_mbytes_per_sec": 0, 00:25:13.448 "w_mbytes_per_sec": 0 00:25:13.448 }, 00:25:13.448 "claimed": true, 00:25:13.448 "claim_type": "exclusive_write", 00:25:13.448 "zoned": false, 00:25:13.448 "supported_io_types": { 00:25:13.448 "read": true, 00:25:13.448 "write": true, 00:25:13.448 "unmap": true, 00:25:13.448 "flush": true, 00:25:13.448 "reset": true, 00:25:13.448 "nvme_admin": false, 00:25:13.448 "nvme_io": false, 00:25:13.448 "nvme_io_md": false, 00:25:13.448 "write_zeroes": true, 00:25:13.448 "zcopy": true, 00:25:13.448 "get_zone_info": false, 00:25:13.448 "zone_management": false, 00:25:13.448 "zone_append": false, 00:25:13.448 "compare": false, 00:25:13.448 "compare_and_write": false, 00:25:13.448 "abort": true, 00:25:13.448 "seek_hole": false, 00:25:13.448 "seek_data": false, 00:25:13.448 "copy": true, 00:25:13.448 "nvme_iov_md": false 00:25:13.448 }, 00:25:13.448 "memory_domains": [ 00:25:13.448 { 00:25:13.448 "dma_device_id": "system", 00:25:13.448 "dma_device_type": 1 00:25:13.448 }, 00:25:13.448 { 00:25:13.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.448 "dma_device_type": 2 00:25:13.448 } 00:25:13.448 ], 00:25:13.448 "driver_specific": {} 00:25:13.448 } 00:25:13.448 ] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:13.448 07:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.448 "name": "Existed_Raid", 00:25:13.448 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:13.448 "strip_size_kb": 64, 00:25:13.448 "state": "configuring", 00:25:13.448 "raid_level": "concat", 00:25:13.448 "superblock": true, 00:25:13.448 "num_base_bdevs": 4, 00:25:13.448 "num_base_bdevs_discovered": 3, 00:25:13.448 "num_base_bdevs_operational": 4, 00:25:13.448 "base_bdevs_list": [ 00:25:13.448 { 00:25:13.448 "name": "BaseBdev1", 00:25:13.448 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:13.448 "is_configured": true, 00:25:13.448 "data_offset": 2048, 00:25:13.448 "data_size": 63488 00:25:13.448 }, 00:25:13.448 { 
00:25:13.448 "name": null, 00:25:13.448 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:13.448 "is_configured": false, 00:25:13.448 "data_offset": 0, 00:25:13.448 "data_size": 63488 00:25:13.448 }, 00:25:13.448 { 00:25:13.448 "name": "BaseBdev3", 00:25:13.448 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:13.448 "is_configured": true, 00:25:13.448 "data_offset": 2048, 00:25:13.448 "data_size": 63488 00:25:13.448 }, 00:25:13.448 { 00:25:13.448 "name": "BaseBdev4", 00:25:13.448 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:13.448 "is_configured": true, 00:25:13.448 "data_offset": 2048, 00:25:13.448 "data_size": 63488 00:25:13.448 } 00:25:13.448 ] 00:25:13.448 }' 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.448 07:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.015 [2024-11-20 07:22:38.155735] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.015 07:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.015 "name": "Existed_Raid", 00:25:14.015 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:14.015 "strip_size_kb": 64, 00:25:14.015 "state": "configuring", 00:25:14.015 "raid_level": "concat", 00:25:14.015 "superblock": true, 00:25:14.015 "num_base_bdevs": 4, 00:25:14.015 "num_base_bdevs_discovered": 2, 00:25:14.015 "num_base_bdevs_operational": 4, 00:25:14.015 "base_bdevs_list": [ 00:25:14.015 { 00:25:14.015 "name": "BaseBdev1", 00:25:14.015 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:14.015 "is_configured": true, 00:25:14.015 "data_offset": 2048, 00:25:14.015 "data_size": 63488 00:25:14.015 }, 00:25:14.015 { 00:25:14.015 "name": null, 00:25:14.015 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:14.015 "is_configured": false, 00:25:14.015 "data_offset": 0, 00:25:14.015 "data_size": 63488 00:25:14.015 }, 00:25:14.015 { 00:25:14.015 "name": null, 00:25:14.015 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:14.015 "is_configured": false, 00:25:14.015 "data_offset": 0, 00:25:14.015 "data_size": 63488 00:25:14.015 }, 00:25:14.015 { 00:25:14.015 "name": "BaseBdev4", 00:25:14.015 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:14.015 "is_configured": true, 00:25:14.015 "data_offset": 2048, 00:25:14.015 "data_size": 63488 00:25:14.015 } 00:25:14.015 ] 00:25:14.015 }' 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.015 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.582 
07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:14.582 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 [2024-11-20 07:22:38.703876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.583 "name": "Existed_Raid", 00:25:14.583 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:14.583 "strip_size_kb": 64, 00:25:14.583 "state": "configuring", 00:25:14.583 "raid_level": "concat", 00:25:14.583 "superblock": true, 00:25:14.583 "num_base_bdevs": 4, 00:25:14.583 "num_base_bdevs_discovered": 3, 00:25:14.583 "num_base_bdevs_operational": 4, 00:25:14.583 "base_bdevs_list": [ 00:25:14.583 { 00:25:14.583 "name": "BaseBdev1", 00:25:14.583 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:14.583 "is_configured": true, 00:25:14.583 "data_offset": 2048, 00:25:14.583 "data_size": 63488 00:25:14.583 }, 00:25:14.583 { 00:25:14.583 "name": null, 00:25:14.583 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:14.583 "is_configured": false, 00:25:14.583 "data_offset": 0, 00:25:14.583 "data_size": 63488 00:25:14.583 }, 00:25:14.583 { 00:25:14.583 "name": "BaseBdev3", 00:25:14.583 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:14.583 "is_configured": true, 00:25:14.583 "data_offset": 2048, 00:25:14.583 "data_size": 63488 00:25:14.583 }, 00:25:14.583 { 00:25:14.583 "name": "BaseBdev4", 00:25:14.583 "uuid": 
"1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:14.583 "is_configured": true, 00:25:14.583 "data_offset": 2048, 00:25:14.583 "data_size": 63488 00:25:14.583 } 00:25:14.583 ] 00:25:14.583 }' 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.583 07:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.187 [2024-11-20 07:22:39.284067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.187 "name": "Existed_Raid", 00:25:15.187 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:15.187 "strip_size_kb": 64, 00:25:15.187 "state": "configuring", 00:25:15.187 "raid_level": "concat", 00:25:15.187 "superblock": true, 00:25:15.187 "num_base_bdevs": 4, 00:25:15.187 "num_base_bdevs_discovered": 2, 00:25:15.187 "num_base_bdevs_operational": 4, 00:25:15.187 "base_bdevs_list": [ 00:25:15.187 { 00:25:15.187 "name": null, 00:25:15.187 
"uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:15.187 "is_configured": false, 00:25:15.187 "data_offset": 0, 00:25:15.187 "data_size": 63488 00:25:15.187 }, 00:25:15.187 { 00:25:15.187 "name": null, 00:25:15.187 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:15.187 "is_configured": false, 00:25:15.187 "data_offset": 0, 00:25:15.187 "data_size": 63488 00:25:15.187 }, 00:25:15.187 { 00:25:15.187 "name": "BaseBdev3", 00:25:15.187 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:15.187 "is_configured": true, 00:25:15.187 "data_offset": 2048, 00:25:15.187 "data_size": 63488 00:25:15.187 }, 00:25:15.187 { 00:25:15.187 "name": "BaseBdev4", 00:25:15.187 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:15.187 "is_configured": true, 00:25:15.187 "data_offset": 2048, 00:25:15.187 "data_size": 63488 00:25:15.187 } 00:25:15.187 ] 00:25:15.187 }' 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.187 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.753 [2024-11-20 07:22:39.946112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.753 07:22:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.753 07:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.753 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.753 "name": "Existed_Raid", 00:25:15.753 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:15.753 "strip_size_kb": 64, 00:25:15.753 "state": "configuring", 00:25:15.753 "raid_level": "concat", 00:25:15.753 "superblock": true, 00:25:15.753 "num_base_bdevs": 4, 00:25:15.753 "num_base_bdevs_discovered": 3, 00:25:15.753 "num_base_bdevs_operational": 4, 00:25:15.753 "base_bdevs_list": [ 00:25:15.753 { 00:25:15.753 "name": null, 00:25:15.753 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:15.753 "is_configured": false, 00:25:15.753 "data_offset": 0, 00:25:15.753 "data_size": 63488 00:25:15.753 }, 00:25:15.753 { 00:25:15.753 "name": "BaseBdev2", 00:25:15.753 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:15.753 "is_configured": true, 00:25:15.753 "data_offset": 2048, 00:25:15.753 "data_size": 63488 00:25:15.753 }, 00:25:15.753 { 00:25:15.753 "name": "BaseBdev3", 00:25:15.753 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:15.753 "is_configured": true, 00:25:15.753 "data_offset": 2048, 00:25:15.753 "data_size": 63488 00:25:15.753 }, 00:25:15.753 { 00:25:15.753 "name": "BaseBdev4", 00:25:15.753 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:15.753 "is_configured": true, 00:25:15.753 "data_offset": 2048, 00:25:15.753 "data_size": 63488 00:25:15.753 } 00:25:15.753 ] 00:25:15.753 }' 00:25:15.753 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.753 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.320 07:22:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ed409f7-133d-45c6-a3e8-2e09953e74a9 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.320 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.579 [2024-11-20 07:22:40.623198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:16.579 [2024-11-20 07:22:40.623686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:16.579 [2024-11-20 07:22:40.623711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:16.579 NewBaseBdev 00:25:16.579 [2024-11-20 07:22:40.624045] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:16.579 [2024-11-20 07:22:40.624226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:16.579 [2024-11-20 07:22:40.624255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:16.579 [2024-11-20 07:22:40.624424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:16.579 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.579 
07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.579 [ 00:25:16.579 { 00:25:16.579 "name": "NewBaseBdev", 00:25:16.579 "aliases": [ 00:25:16.579 "3ed409f7-133d-45c6-a3e8-2e09953e74a9" 00:25:16.579 ], 00:25:16.579 "product_name": "Malloc disk", 00:25:16.579 "block_size": 512, 00:25:16.579 "num_blocks": 65536, 00:25:16.579 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:16.579 "assigned_rate_limits": { 00:25:16.579 "rw_ios_per_sec": 0, 00:25:16.579 "rw_mbytes_per_sec": 0, 00:25:16.579 "r_mbytes_per_sec": 0, 00:25:16.579 "w_mbytes_per_sec": 0 00:25:16.579 }, 00:25:16.579 "claimed": true, 00:25:16.579 "claim_type": "exclusive_write", 00:25:16.579 "zoned": false, 00:25:16.579 "supported_io_types": { 00:25:16.579 "read": true, 00:25:16.579 "write": true, 00:25:16.579 "unmap": true, 00:25:16.579 "flush": true, 00:25:16.579 "reset": true, 00:25:16.579 "nvme_admin": false, 00:25:16.579 "nvme_io": false, 00:25:16.579 "nvme_io_md": false, 00:25:16.579 "write_zeroes": true, 00:25:16.579 "zcopy": true, 00:25:16.579 "get_zone_info": false, 00:25:16.579 "zone_management": false, 00:25:16.579 "zone_append": false, 00:25:16.579 "compare": false, 00:25:16.579 "compare_and_write": false, 00:25:16.579 "abort": true, 00:25:16.579 "seek_hole": false, 00:25:16.579 "seek_data": false, 00:25:16.579 "copy": true, 00:25:16.579 "nvme_iov_md": false 00:25:16.579 }, 00:25:16.580 "memory_domains": [ 00:25:16.580 { 00:25:16.580 "dma_device_id": "system", 00:25:16.580 "dma_device_type": 1 00:25:16.580 }, 00:25:16.580 { 00:25:16.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.580 "dma_device_type": 2 00:25:16.580 } 00:25:16.580 ], 00:25:16.580 "driver_specific": {} 00:25:16.580 } 00:25:16.580 ] 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:16.580 07:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.580 "name": "Existed_Raid", 00:25:16.580 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:16.580 "strip_size_kb": 64, 00:25:16.580 
"state": "online", 00:25:16.580 "raid_level": "concat", 00:25:16.580 "superblock": true, 00:25:16.580 "num_base_bdevs": 4, 00:25:16.580 "num_base_bdevs_discovered": 4, 00:25:16.580 "num_base_bdevs_operational": 4, 00:25:16.580 "base_bdevs_list": [ 00:25:16.580 { 00:25:16.580 "name": "NewBaseBdev", 00:25:16.580 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:16.580 "is_configured": true, 00:25:16.580 "data_offset": 2048, 00:25:16.580 "data_size": 63488 00:25:16.580 }, 00:25:16.580 { 00:25:16.580 "name": "BaseBdev2", 00:25:16.580 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:16.580 "is_configured": true, 00:25:16.580 "data_offset": 2048, 00:25:16.580 "data_size": 63488 00:25:16.580 }, 00:25:16.580 { 00:25:16.580 "name": "BaseBdev3", 00:25:16.580 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:16.580 "is_configured": true, 00:25:16.580 "data_offset": 2048, 00:25:16.580 "data_size": 63488 00:25:16.580 }, 00:25:16.580 { 00:25:16.580 "name": "BaseBdev4", 00:25:16.580 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:16.580 "is_configured": true, 00:25:16.580 "data_offset": 2048, 00:25:16.580 "data_size": 63488 00:25:16.580 } 00:25:16.580 ] 00:25:16.580 }' 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.580 07:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:17.148 
07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.148 [2024-11-20 07:22:41.183859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.148 "name": "Existed_Raid", 00:25:17.148 "aliases": [ 00:25:17.148 "08ef0761-b3a6-457a-a85a-66b82ee3ce0c" 00:25:17.148 ], 00:25:17.148 "product_name": "Raid Volume", 00:25:17.148 "block_size": 512, 00:25:17.148 "num_blocks": 253952, 00:25:17.148 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:17.148 "assigned_rate_limits": { 00:25:17.148 "rw_ios_per_sec": 0, 00:25:17.148 "rw_mbytes_per_sec": 0, 00:25:17.148 "r_mbytes_per_sec": 0, 00:25:17.148 "w_mbytes_per_sec": 0 00:25:17.148 }, 00:25:17.148 "claimed": false, 00:25:17.148 "zoned": false, 00:25:17.148 "supported_io_types": { 00:25:17.148 "read": true, 00:25:17.148 "write": true, 00:25:17.148 "unmap": true, 00:25:17.148 "flush": true, 00:25:17.148 "reset": true, 00:25:17.148 "nvme_admin": false, 00:25:17.148 "nvme_io": false, 00:25:17.148 "nvme_io_md": false, 00:25:17.148 "write_zeroes": true, 00:25:17.148 "zcopy": false, 00:25:17.148 "get_zone_info": false, 00:25:17.148 "zone_management": false, 00:25:17.148 "zone_append": false, 00:25:17.148 "compare": false, 00:25:17.148 "compare_and_write": false, 00:25:17.148 "abort": 
false, 00:25:17.148 "seek_hole": false, 00:25:17.148 "seek_data": false, 00:25:17.148 "copy": false, 00:25:17.148 "nvme_iov_md": false 00:25:17.148 }, 00:25:17.148 "memory_domains": [ 00:25:17.148 { 00:25:17.148 "dma_device_id": "system", 00:25:17.148 "dma_device_type": 1 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.148 "dma_device_type": 2 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "system", 00:25:17.148 "dma_device_type": 1 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.148 "dma_device_type": 2 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "system", 00:25:17.148 "dma_device_type": 1 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.148 "dma_device_type": 2 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "system", 00:25:17.148 "dma_device_type": 1 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.148 "dma_device_type": 2 00:25:17.148 } 00:25:17.148 ], 00:25:17.148 "driver_specific": { 00:25:17.148 "raid": { 00:25:17.148 "uuid": "08ef0761-b3a6-457a-a85a-66b82ee3ce0c", 00:25:17.148 "strip_size_kb": 64, 00:25:17.148 "state": "online", 00:25:17.148 "raid_level": "concat", 00:25:17.148 "superblock": true, 00:25:17.148 "num_base_bdevs": 4, 00:25:17.148 "num_base_bdevs_discovered": 4, 00:25:17.148 "num_base_bdevs_operational": 4, 00:25:17.148 "base_bdevs_list": [ 00:25:17.148 { 00:25:17.148 "name": "NewBaseBdev", 00:25:17.148 "uuid": "3ed409f7-133d-45c6-a3e8-2e09953e74a9", 00:25:17.148 "is_configured": true, 00:25:17.148 "data_offset": 2048, 00:25:17.148 "data_size": 63488 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "name": "BaseBdev2", 00:25:17.148 "uuid": "d003aa07-4e3c-48f7-b4d6-9f4110c207e3", 00:25:17.148 "is_configured": true, 00:25:17.148 "data_offset": 2048, 00:25:17.148 "data_size": 63488 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 
"name": "BaseBdev3", 00:25:17.148 "uuid": "1673ea35-14ed-4282-9d7a-c4b07db7c90e", 00:25:17.148 "is_configured": true, 00:25:17.148 "data_offset": 2048, 00:25:17.148 "data_size": 63488 00:25:17.148 }, 00:25:17.148 { 00:25:17.148 "name": "BaseBdev4", 00:25:17.148 "uuid": "1df3ecfd-fcda-4be4-8c87-70a71d32f737", 00:25:17.148 "is_configured": true, 00:25:17.148 "data_offset": 2048, 00:25:17.148 "data_size": 63488 00:25:17.148 } 00:25:17.148 ] 00:25:17.148 } 00:25:17.148 } 00:25:17.148 }' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:17.148 BaseBdev2 00:25:17.148 BaseBdev3 00:25:17.148 BaseBdev4' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.148 07:22:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.148 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.407 [2024-11-20 07:22:41.535474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:17.407 [2024-11-20 07:22:41.535643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:17.407 [2024-11-20 07:22:41.535867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.407 [2024-11-20 07:22:41.536061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:17.407 [2024-11-20 07:22:41.536173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72282 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72282 ']' 00:25:17.407 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72282 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72282 00:25:17.408 killing process with pid 72282 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72282' 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72282 00:25:17.408 [2024-11-20 07:22:41.571821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:17.408 07:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72282 00:25:17.667 [2024-11-20 07:22:41.927269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:19.044 ************************************ 00:25:19.044 END TEST raid_state_function_test_sb 00:25:19.044 ************************************ 00:25:19.044 07:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:19.044 00:25:19.044 real 0m12.843s 00:25:19.044 user 0m21.377s 00:25:19.044 sys 
0m1.726s 00:25:19.044 07:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.044 07:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.044 07:22:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:25:19.044 07:22:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:19.044 07:22:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.044 07:22:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:19.044 ************************************ 00:25:19.044 START TEST raid_superblock_test 00:25:19.044 ************************************ 00:25:19.044 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:25:19.044 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:19.044 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:25:19.044 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:19.044 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72969 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72969 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72969 ']' 00:25:19.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.045 07:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.045 [2024-11-20 07:22:43.083667] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:19.045 [2024-11-20 07:22:43.083827] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72969 ] 00:25:19.045 [2024-11-20 07:22:43.258189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.304 [2024-11-20 07:22:43.386544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.304 [2024-11-20 07:22:43.588723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.304 [2024-11-20 07:22:43.589039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.871 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.871 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:19.872 
07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.872 malloc1 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.872 [2024-11-20 07:22:44.089794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:19.872 [2024-11-20 07:22:44.090012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.872 [2024-11-20 07:22:44.090164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:19.872 [2024-11-20 07:22:44.090291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.872 [2024-11-20 07:22:44.093176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.872 [2024-11-20 07:22:44.093338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:19.872 pt1 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.872 malloc2 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.872 [2024-11-20 07:22:44.145523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:19.872 [2024-11-20 07:22:44.145606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.872 [2024-11-20 07:22:44.145640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:19.872 [2024-11-20 07:22:44.145655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.872 [2024-11-20 07:22:44.148386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.872 pt2 00:25:19.872 [2024-11-20 07:22:44.148550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.872 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.131 malloc3 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.131 [2024-11-20 07:22:44.208051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:20.131 [2024-11-20 07:22:44.208245] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.131 [2024-11-20 07:22:44.208321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:20.131 [2024-11-20 07:22:44.208438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.131 [2024-11-20 07:22:44.211178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.131 [2024-11-20 07:22:44.211224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:20.131 pt3 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.131 malloc4 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.131 [2024-11-20 07:22:44.259792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:20.131 [2024-11-20 07:22:44.259981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.131 [2024-11-20 07:22:44.260059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:20.131 [2024-11-20 07:22:44.260268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.131 [2024-11-20 07:22:44.263048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.131 [2024-11-20 07:22:44.263094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:20.131 pt4 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:20.131 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.132 [2024-11-20 07:22:44.267903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:20.132 [2024-11-20 
07:22:44.270347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:20.132 [2024-11-20 07:22:44.270559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:20.132 [2024-11-20 07:22:44.270732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:20.132 [2024-11-20 07:22:44.271040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:20.132 [2024-11-20 07:22:44.271156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:20.132 [2024-11-20 07:22:44.271534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:20.132 [2024-11-20 07:22:44.271885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:20.132 [2024-11-20 07:22:44.272013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:20.132 [2024-11-20 07:22:44.272265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.132 "name": "raid_bdev1", 00:25:20.132 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:20.132 "strip_size_kb": 64, 00:25:20.132 "state": "online", 00:25:20.132 "raid_level": "concat", 00:25:20.132 "superblock": true, 00:25:20.132 "num_base_bdevs": 4, 00:25:20.132 "num_base_bdevs_discovered": 4, 00:25:20.132 "num_base_bdevs_operational": 4, 00:25:20.132 "base_bdevs_list": [ 00:25:20.132 { 00:25:20.132 "name": "pt1", 00:25:20.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:20.132 "is_configured": true, 00:25:20.132 "data_offset": 2048, 00:25:20.132 "data_size": 63488 00:25:20.132 }, 00:25:20.132 { 00:25:20.132 "name": "pt2", 00:25:20.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.132 "is_configured": true, 00:25:20.132 "data_offset": 2048, 00:25:20.132 "data_size": 63488 00:25:20.132 }, 00:25:20.132 { 00:25:20.132 "name": "pt3", 00:25:20.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.132 "is_configured": true, 00:25:20.132 "data_offset": 2048, 00:25:20.132 
"data_size": 63488 00:25:20.132 }, 00:25:20.132 { 00:25:20.132 "name": "pt4", 00:25:20.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:20.132 "is_configured": true, 00:25:20.132 "data_offset": 2048, 00:25:20.132 "data_size": 63488 00:25:20.132 } 00:25:20.132 ] 00:25:20.132 }' 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.132 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.699 [2024-11-20 07:22:44.764770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.699 "name": "raid_bdev1", 00:25:20.699 "aliases": [ 00:25:20.699 "afa09d16-66b1-484b-a037-81c4a3921582" 
00:25:20.699 ], 00:25:20.699 "product_name": "Raid Volume", 00:25:20.699 "block_size": 512, 00:25:20.699 "num_blocks": 253952, 00:25:20.699 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:20.699 "assigned_rate_limits": { 00:25:20.699 "rw_ios_per_sec": 0, 00:25:20.699 "rw_mbytes_per_sec": 0, 00:25:20.699 "r_mbytes_per_sec": 0, 00:25:20.699 "w_mbytes_per_sec": 0 00:25:20.699 }, 00:25:20.699 "claimed": false, 00:25:20.699 "zoned": false, 00:25:20.699 "supported_io_types": { 00:25:20.699 "read": true, 00:25:20.699 "write": true, 00:25:20.699 "unmap": true, 00:25:20.699 "flush": true, 00:25:20.699 "reset": true, 00:25:20.699 "nvme_admin": false, 00:25:20.699 "nvme_io": false, 00:25:20.699 "nvme_io_md": false, 00:25:20.699 "write_zeroes": true, 00:25:20.699 "zcopy": false, 00:25:20.699 "get_zone_info": false, 00:25:20.699 "zone_management": false, 00:25:20.699 "zone_append": false, 00:25:20.699 "compare": false, 00:25:20.699 "compare_and_write": false, 00:25:20.699 "abort": false, 00:25:20.699 "seek_hole": false, 00:25:20.699 "seek_data": false, 00:25:20.699 "copy": false, 00:25:20.699 "nvme_iov_md": false 00:25:20.699 }, 00:25:20.699 "memory_domains": [ 00:25:20.699 { 00:25:20.699 "dma_device_id": "system", 00:25:20.699 "dma_device_type": 1 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.699 "dma_device_type": 2 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "system", 00:25:20.699 "dma_device_type": 1 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.699 "dma_device_type": 2 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "system", 00:25:20.699 "dma_device_type": 1 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.699 "dma_device_type": 2 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": "system", 00:25:20.699 "dma_device_type": 1 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:20.699 "dma_device_type": 2 00:25:20.699 } 00:25:20.699 ], 00:25:20.699 "driver_specific": { 00:25:20.699 "raid": { 00:25:20.699 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:20.699 "strip_size_kb": 64, 00:25:20.699 "state": "online", 00:25:20.699 "raid_level": "concat", 00:25:20.699 "superblock": true, 00:25:20.699 "num_base_bdevs": 4, 00:25:20.699 "num_base_bdevs_discovered": 4, 00:25:20.699 "num_base_bdevs_operational": 4, 00:25:20.699 "base_bdevs_list": [ 00:25:20.699 { 00:25:20.699 "name": "pt1", 00:25:20.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:20.699 "is_configured": true, 00:25:20.699 "data_offset": 2048, 00:25:20.699 "data_size": 63488 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "name": "pt2", 00:25:20.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.699 "is_configured": true, 00:25:20.699 "data_offset": 2048, 00:25:20.699 "data_size": 63488 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "name": "pt3", 00:25:20.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.699 "is_configured": true, 00:25:20.699 "data_offset": 2048, 00:25:20.699 "data_size": 63488 00:25:20.699 }, 00:25:20.699 { 00:25:20.699 "name": "pt4", 00:25:20.699 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:20.699 "is_configured": true, 00:25:20.699 "data_offset": 2048, 00:25:20.699 "data_size": 63488 00:25:20.699 } 00:25:20.699 ] 00:25:20.699 } 00:25:20.699 } 00:25:20.699 }' 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:20.699 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:20.699 pt2 00:25:20.700 pt3 00:25:20.700 pt4' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.700 07:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:20.959 07:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:20.959 [2024-11-20 07:22:45.120849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=afa09d16-66b1-484b-a037-81c4a3921582 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z afa09d16-66b1-484b-a037-81c4a3921582 ']' 00:25:20.959 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.960 [2024-11-20 07:22:45.188467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.960 [2024-11-20 07:22:45.188634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.960 [2024-11-20 07:22:45.188769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.960 [2024-11-20 07:22:45.188871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.960 [2024-11-20 07:22:45.188902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.960 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.220 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.220 [2024-11-20 07:22:45.356523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:21.220 [2024-11-20 07:22:45.359170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:21.220 [2024-11-20 07:22:45.359361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:21.220 [2024-11-20 07:22:45.359429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:21.220 [2024-11-20 07:22:45.359513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:21.220 [2024-11-20 07:22:45.359617] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:21.220 [2024-11-20 07:22:45.359652] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:21.221 [2024-11-20 07:22:45.359683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:21.221 [2024-11-20 07:22:45.359703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.221 [2024-11-20 07:22:45.359720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:25:21.221 request: 00:25:21.221 { 00:25:21.221 "name": "raid_bdev1", 00:25:21.221 "raid_level": "concat", 00:25:21.221 "base_bdevs": [ 00:25:21.221 "malloc1", 00:25:21.221 "malloc2", 00:25:21.221 "malloc3", 00:25:21.221 "malloc4" 00:25:21.221 ], 00:25:21.221 "strip_size_kb": 64, 00:25:21.221 "superblock": false, 00:25:21.221 "method": "bdev_raid_create", 00:25:21.221 "req_id": 1 00:25:21.221 } 00:25:21.221 Got JSON-RPC error response 00:25:21.221 response: 00:25:21.221 { 00:25:21.221 "code": -17, 00:25:21.221 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:21.221 } 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.221 [2024-11-20 07:22:45.424553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:21.221 [2024-11-20 07:22:45.424779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.221 [2024-11-20 07:22:45.424849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:21.221 [2024-11-20 07:22:45.424960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.221 [2024-11-20 07:22:45.427884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.221 [2024-11-20 07:22:45.428058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:21.221 [2024-11-20 07:22:45.428270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:21.221 [2024-11-20 07:22:45.428492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:21.221 pt1 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.221 "name": "raid_bdev1", 00:25:21.221 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:21.221 "strip_size_kb": 64, 00:25:21.221 "state": "configuring", 00:25:21.221 "raid_level": "concat", 00:25:21.221 "superblock": true, 00:25:21.221 "num_base_bdevs": 4, 00:25:21.221 "num_base_bdevs_discovered": 1, 00:25:21.221 "num_base_bdevs_operational": 4, 00:25:21.221 "base_bdevs_list": [ 00:25:21.221 { 00:25:21.221 "name": "pt1", 00:25:21.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:21.221 "is_configured": true, 00:25:21.221 "data_offset": 2048, 00:25:21.221 "data_size": 63488 00:25:21.221 }, 00:25:21.221 { 00:25:21.221 "name": null, 00:25:21.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:21.221 "is_configured": false, 00:25:21.221 "data_offset": 2048, 00:25:21.221 "data_size": 63488 00:25:21.221 }, 00:25:21.221 { 00:25:21.221 "name": null, 00:25:21.221 
"uuid": "00000000-0000-0000-0000-000000000003", 00:25:21.221 "is_configured": false, 00:25:21.221 "data_offset": 2048, 00:25:21.221 "data_size": 63488 00:25:21.221 }, 00:25:21.221 { 00:25:21.221 "name": null, 00:25:21.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:21.221 "is_configured": false, 00:25:21.221 "data_offset": 2048, 00:25:21.221 "data_size": 63488 00:25:21.221 } 00:25:21.221 ] 00:25:21.221 }' 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.221 07:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.789 [2024-11-20 07:22:46.025040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:21.789 [2024-11-20 07:22:46.025135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.789 [2024-11-20 07:22:46.025165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:21.789 [2024-11-20 07:22:46.025182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.789 [2024-11-20 07:22:46.025779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.789 [2024-11-20 07:22:46.025818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:21.789 [2024-11-20 07:22:46.025924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:21.789 [2024-11-20 07:22:46.025967] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:21.789 pt2 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.789 [2024-11-20 07:22:46.033047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.789 07:22:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.789 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.047 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.047 "name": "raid_bdev1", 00:25:22.047 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:22.047 "strip_size_kb": 64, 00:25:22.047 "state": "configuring", 00:25:22.047 "raid_level": "concat", 00:25:22.047 "superblock": true, 00:25:22.047 "num_base_bdevs": 4, 00:25:22.047 "num_base_bdevs_discovered": 1, 00:25:22.047 "num_base_bdevs_operational": 4, 00:25:22.047 "base_bdevs_list": [ 00:25:22.047 { 00:25:22.047 "name": "pt1", 00:25:22.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:22.047 "is_configured": true, 00:25:22.047 "data_offset": 2048, 00:25:22.047 "data_size": 63488 00:25:22.047 }, 00:25:22.047 { 00:25:22.047 "name": null, 00:25:22.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:22.047 "is_configured": false, 00:25:22.047 "data_offset": 0, 00:25:22.047 "data_size": 63488 00:25:22.047 }, 00:25:22.047 { 00:25:22.047 "name": null, 00:25:22.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:22.047 "is_configured": false, 00:25:22.047 "data_offset": 2048, 00:25:22.047 "data_size": 63488 00:25:22.047 }, 00:25:22.047 { 00:25:22.047 "name": null, 00:25:22.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:22.047 "is_configured": false, 00:25:22.047 "data_offset": 2048, 00:25:22.047 "data_size": 63488 00:25:22.047 } 00:25:22.047 ] 00:25:22.047 }' 00:25:22.047 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.048 07:22:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.307 [2024-11-20 07:22:46.577172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:22.307 [2024-11-20 07:22:46.577391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.307 [2024-11-20 07:22:46.577468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:22.307 [2024-11-20 07:22:46.577711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.307 [2024-11-20 07:22:46.578303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.307 [2024-11-20 07:22:46.578336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:22.307 [2024-11-20 07:22:46.578456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:22.307 [2024-11-20 07:22:46.578492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:22.307 pt2 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.307 [2024-11-20 07:22:46.585131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:22.307 [2024-11-20 07:22:46.585191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.307 [2024-11-20 07:22:46.585225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:22.307 [2024-11-20 07:22:46.585241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.307 [2024-11-20 07:22:46.585725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.307 [2024-11-20 07:22:46.585765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:22.307 [2024-11-20 07:22:46.585853] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:22.307 [2024-11-20 07:22:46.585881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:22.307 pt3 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.307 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.307 [2024-11-20 07:22:46.593110] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:22.307 [2024-11-20 07:22:46.593301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.307 [2024-11-20 07:22:46.593373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:22.307 [2024-11-20 07:22:46.593493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.307 [2024-11-20 07:22:46.594028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.307 [2024-11-20 07:22:46.594180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:22.308 [2024-11-20 07:22:46.594396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:22.308 [2024-11-20 07:22:46.594546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:22.308 pt4 00:25:22.308 [2024-11-20 07:22:46.594856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:22.308 [2024-11-20 07:22:46.594976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:22.308 [2024-11-20 07:22:46.595338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:22.308 [2024-11-20 07:22:46.595655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:22.567 [2024-11-20 07:22:46.595796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:22.567 [2024-11-20 07:22:46.596055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 --
# (( i < num_base_bdevs )) 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.567 "name": "raid_bdev1", 00:25:22.567 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:22.567 "strip_size_kb": 64, 00:25:22.567 "state": "online", 00:25:22.567 "raid_level": "concat", 00:25:22.567 
"superblock": true, 00:25:22.567 "num_base_bdevs": 4, 00:25:22.567 "num_base_bdevs_discovered": 4, 00:25:22.567 "num_base_bdevs_operational": 4, 00:25:22.567 "base_bdevs_list": [ 00:25:22.567 { 00:25:22.567 "name": "pt1", 00:25:22.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:22.567 "is_configured": true, 00:25:22.567 "data_offset": 2048, 00:25:22.567 "data_size": 63488 00:25:22.567 }, 00:25:22.567 { 00:25:22.567 "name": "pt2", 00:25:22.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:22.567 "is_configured": true, 00:25:22.567 "data_offset": 2048, 00:25:22.567 "data_size": 63488 00:25:22.567 }, 00:25:22.567 { 00:25:22.567 "name": "pt3", 00:25:22.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:22.567 "is_configured": true, 00:25:22.567 "data_offset": 2048, 00:25:22.567 "data_size": 63488 00:25:22.567 }, 00:25:22.567 { 00:25:22.567 "name": "pt4", 00:25:22.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:22.567 "is_configured": true, 00:25:22.567 "data_offset": 2048, 00:25:22.567 "data_size": 63488 00:25:22.567 } 00:25:22.567 ] 00:25:22.567 }' 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.567 07:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:22.827 07:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:22.827 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.087 [2024-11-20 07:22:47.113713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:23.087 "name": "raid_bdev1", 00:25:23.087 "aliases": [ 00:25:23.087 "afa09d16-66b1-484b-a037-81c4a3921582" 00:25:23.087 ], 00:25:23.087 "product_name": "Raid Volume", 00:25:23.087 "block_size": 512, 00:25:23.087 "num_blocks": 253952, 00:25:23.087 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:23.087 "assigned_rate_limits": { 00:25:23.087 "rw_ios_per_sec": 0, 00:25:23.087 "rw_mbytes_per_sec": 0, 00:25:23.087 "r_mbytes_per_sec": 0, 00:25:23.087 "w_mbytes_per_sec": 0 00:25:23.087 }, 00:25:23.087 "claimed": false, 00:25:23.087 "zoned": false, 00:25:23.087 "supported_io_types": { 00:25:23.087 "read": true, 00:25:23.087 "write": true, 00:25:23.087 "unmap": true, 00:25:23.087 "flush": true, 00:25:23.087 "reset": true, 00:25:23.087 "nvme_admin": false, 00:25:23.087 "nvme_io": false, 00:25:23.087 "nvme_io_md": false, 00:25:23.087 "write_zeroes": true, 00:25:23.087 "zcopy": false, 00:25:23.087 "get_zone_info": false, 00:25:23.087 "zone_management": false, 00:25:23.087 "zone_append": false, 00:25:23.087 "compare": false, 00:25:23.087 "compare_and_write": false, 00:25:23.087 "abort": false, 00:25:23.087 "seek_hole": false, 00:25:23.087 "seek_data": false, 00:25:23.087 "copy": false, 00:25:23.087 "nvme_iov_md": false 00:25:23.087 }, 00:25:23.087 
"memory_domains": [ 00:25:23.087 { 00:25:23.087 "dma_device_id": "system", 00:25:23.087 "dma_device_type": 1 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.087 "dma_device_type": 2 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "system", 00:25:23.087 "dma_device_type": 1 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.087 "dma_device_type": 2 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "system", 00:25:23.087 "dma_device_type": 1 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.087 "dma_device_type": 2 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "system", 00:25:23.087 "dma_device_type": 1 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.087 "dma_device_type": 2 00:25:23.087 } 00:25:23.087 ], 00:25:23.087 "driver_specific": { 00:25:23.087 "raid": { 00:25:23.087 "uuid": "afa09d16-66b1-484b-a037-81c4a3921582", 00:25:23.087 "strip_size_kb": 64, 00:25:23.087 "state": "online", 00:25:23.087 "raid_level": "concat", 00:25:23.087 "superblock": true, 00:25:23.087 "num_base_bdevs": 4, 00:25:23.087 "num_base_bdevs_discovered": 4, 00:25:23.087 "num_base_bdevs_operational": 4, 00:25:23.087 "base_bdevs_list": [ 00:25:23.087 { 00:25:23.087 "name": "pt1", 00:25:23.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:23.087 "is_configured": true, 00:25:23.087 "data_offset": 2048, 00:25:23.087 "data_size": 63488 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "name": "pt2", 00:25:23.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:23.087 "is_configured": true, 00:25:23.087 "data_offset": 2048, 00:25:23.087 "data_size": 63488 00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "name": "pt3", 00:25:23.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:23.087 "is_configured": true, 00:25:23.087 "data_offset": 2048, 00:25:23.087 "data_size": 63488 
00:25:23.087 }, 00:25:23.087 { 00:25:23.087 "name": "pt4", 00:25:23.087 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:23.087 "is_configured": true, 00:25:23.087 "data_offset": 2048, 00:25:23.087 "data_size": 63488 00:25:23.087 } 00:25:23.087 ] 00:25:23.087 } 00:25:23.087 } 00:25:23.087 }' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:23.087 pt2 00:25:23.087 pt3 00:25:23.087 pt4' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.087 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.354 [2024-11-20 07:22:47.481764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' afa09d16-66b1-484b-a037-81c4a3921582 '!=' afa09d16-66b1-484b-a037-81c4a3921582 ']' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72969 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72969 ']' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72969 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72969 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72969' 00:25:23.354 killing process with pid 72969 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72969 00:25:23.354 [2024-11-20 07:22:47.560649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.354 07:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72969 00:25:23.354 [2024-11-20 07:22:47.560907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.354 [2024-11-20 07:22:47.561139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.354 [2024-11-20 07:22:47.561166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:23.922 [2024-11-20 07:22:47.921028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.859 ************************************ 00:25:24.859 END TEST raid_superblock_test 00:25:24.859 ************************************ 00:25:24.859 07:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:24.859 00:25:24.859 real 0m5.948s 00:25:24.859 user 0m9.026s 00:25:24.859 sys 0m0.803s 00:25:24.859 07:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.859 07:22:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.859 07:22:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:24.859 07:22:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:24.859 07:22:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.859 07:22:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:24.859 ************************************ 00:25:24.859 START TEST raid_read_error_test 00:25:24.859 ************************************ 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oW5vUL47aQ 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73235 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73235 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73235 ']' 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.859 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.860 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.860 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.860 07:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.860 [2024-11-20 07:22:49.100372] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:24.860 [2024-11-20 07:22:49.100534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73235 ] 00:25:25.118 [2024-11-20 07:22:49.277799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.118 [2024-11-20 07:22:49.406386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.377 [2024-11-20 07:22:49.607621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.377 [2024-11-20 07:22:49.607678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 BaseBdev1_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 true 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 [2024-11-20 07:22:50.139248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:25.945 [2024-11-20 07:22:50.139325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.945 [2024-11-20 07:22:50.139355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:25.945 [2024-11-20 07:22:50.139374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.945 [2024-11-20 07:22:50.142147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.945 [2024-11-20 07:22:50.142197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:25.945 BaseBdev1 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 BaseBdev2_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 true 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.945 [2024-11-20 07:22:50.195242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:25.945 [2024-11-20 07:22:50.195314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.945 [2024-11-20 07:22:50.195341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:25.945 [2024-11-20 07:22:50.195359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.945 [2024-11-20 07:22:50.198167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.945 [2024-11-20 07:22:50.198219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:25.945 BaseBdev2 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.945 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 BaseBdev3_malloc 00:25:26.205 07:22:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 true 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 [2024-11-20 07:22:50.261101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:26.205 [2024-11-20 07:22:50.261201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.205 [2024-11-20 07:22:50.261229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:26.205 [2024-11-20 07:22:50.261247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.205 [2024-11-20 07:22:50.264043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.205 [2024-11-20 07:22:50.264108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:26.205 BaseBdev3 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 BaseBdev4_malloc 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 true 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 [2024-11-20 07:22:50.317097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:26.205 [2024-11-20 07:22:50.317213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.205 [2024-11-20 07:22:50.317241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:26.205 [2024-11-20 07:22:50.317259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.205 [2024-11-20 07:22:50.320132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.205 [2024-11-20 07:22:50.320217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:26.205 BaseBdev4 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.205 [2024-11-20 07:22:50.325167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.205 [2024-11-20 07:22:50.327692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.205 [2024-11-20 07:22:50.327820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.205 [2024-11-20 07:22:50.327920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:26.205 [2024-11-20 07:22:50.328226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:25:26.205 [2024-11-20 07:22:50.328249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:26.205 [2024-11-20 07:22:50.328593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:25:26.205 [2024-11-20 07:22:50.328827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:25:26.205 [2024-11-20 07:22:50.328846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:25:26.205 [2024-11-20 07:22:50.329123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:26.205 07:22:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.205 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.206 "name": "raid_bdev1", 00:25:26.206 "uuid": "95c87c03-119c-4050-a995-16adfce4698b", 00:25:26.206 "strip_size_kb": 64, 00:25:26.206 "state": "online", 00:25:26.206 "raid_level": "concat", 00:25:26.206 "superblock": true, 00:25:26.206 "num_base_bdevs": 4, 00:25:26.206 "num_base_bdevs_discovered": 4, 00:25:26.206 "num_base_bdevs_operational": 4, 00:25:26.206 "base_bdevs_list": [ 
00:25:26.206 { 00:25:26.206 "name": "BaseBdev1", 00:25:26.206 "uuid": "32356330-2526-5444-bb96-ff19f5cf362b", 00:25:26.206 "is_configured": true, 00:25:26.206 "data_offset": 2048, 00:25:26.206 "data_size": 63488 00:25:26.206 }, 00:25:26.206 { 00:25:26.206 "name": "BaseBdev2", 00:25:26.206 "uuid": "ee82e957-026c-54c4-b7b2-97ae49b3f8a0", 00:25:26.206 "is_configured": true, 00:25:26.206 "data_offset": 2048, 00:25:26.206 "data_size": 63488 00:25:26.206 }, 00:25:26.206 { 00:25:26.206 "name": "BaseBdev3", 00:25:26.206 "uuid": "e1a1f725-f362-5a66-8a05-da71bb881e7a", 00:25:26.206 "is_configured": true, 00:25:26.206 "data_offset": 2048, 00:25:26.206 "data_size": 63488 00:25:26.206 }, 00:25:26.206 { 00:25:26.206 "name": "BaseBdev4", 00:25:26.206 "uuid": "808cc927-573f-54d1-b34f-840bea2f28db", 00:25:26.206 "is_configured": true, 00:25:26.206 "data_offset": 2048, 00:25:26.206 "data_size": 63488 00:25:26.206 } 00:25:26.206 ] 00:25:26.206 }' 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.206 07:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.774 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:26.774 07:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:26.774 [2024-11-20 07:22:50.942806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.714 07:22:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.714 07:22:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.714 "name": "raid_bdev1", 00:25:27.714 "uuid": "95c87c03-119c-4050-a995-16adfce4698b", 00:25:27.714 "strip_size_kb": 64, 00:25:27.714 "state": "online", 00:25:27.714 "raid_level": "concat", 00:25:27.714 "superblock": true, 00:25:27.714 "num_base_bdevs": 4, 00:25:27.714 "num_base_bdevs_discovered": 4, 00:25:27.714 "num_base_bdevs_operational": 4, 00:25:27.714 "base_bdevs_list": [ 00:25:27.714 { 00:25:27.714 "name": "BaseBdev1", 00:25:27.714 "uuid": "32356330-2526-5444-bb96-ff19f5cf362b", 00:25:27.714 "is_configured": true, 00:25:27.714 "data_offset": 2048, 00:25:27.714 "data_size": 63488 00:25:27.714 }, 00:25:27.714 { 00:25:27.714 "name": "BaseBdev2", 00:25:27.714 "uuid": "ee82e957-026c-54c4-b7b2-97ae49b3f8a0", 00:25:27.714 "is_configured": true, 00:25:27.714 "data_offset": 2048, 00:25:27.714 "data_size": 63488 00:25:27.714 }, 00:25:27.714 { 00:25:27.714 "name": "BaseBdev3", 00:25:27.714 "uuid": "e1a1f725-f362-5a66-8a05-da71bb881e7a", 00:25:27.714 "is_configured": true, 00:25:27.714 "data_offset": 2048, 00:25:27.714 "data_size": 63488 00:25:27.714 }, 00:25:27.714 { 00:25:27.714 "name": "BaseBdev4", 00:25:27.714 "uuid": "808cc927-573f-54d1-b34f-840bea2f28db", 00:25:27.714 "is_configured": true, 00:25:27.714 "data_offset": 2048, 00:25:27.714 "data_size": 63488 00:25:27.714 } 00:25:27.714 ] 00:25:27.714 }' 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.714 07:22:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.282 [2024-11-20 07:22:52.385774] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.282 [2024-11-20 07:22:52.385830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.282 [2024-11-20 07:22:52.389318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.282 [2024-11-20 07:22:52.389441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.282 [2024-11-20 07:22:52.389509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.282 [2024-11-20 07:22:52.389532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:25:28.282 { 00:25:28.282 "results": [ 00:25:28.282 { 00:25:28.282 "job": "raid_bdev1", 00:25:28.282 "core_mask": "0x1", 00:25:28.282 "workload": "randrw", 00:25:28.282 "percentage": 50, 00:25:28.282 "status": "finished", 00:25:28.282 "queue_depth": 1, 00:25:28.282 "io_size": 131072, 00:25:28.282 "runtime": 1.440578, 00:25:28.282 "iops": 10668.63439536075, 00:25:28.282 "mibps": 1333.5792994200938, 00:25:28.282 "io_failed": 1, 00:25:28.282 "io_timeout": 0, 00:25:28.282 "avg_latency_us": 130.93068291240314, 00:25:28.282 "min_latency_us": 35.60727272727273, 00:25:28.282 "max_latency_us": 2055.447272727273 00:25:28.282 } 00:25:28.282 ], 00:25:28.282 "core_count": 1 00:25:28.282 } 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73235 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73235 ']' 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73235 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73235 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.282 killing process with pid 73235 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73235' 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73235 00:25:28.282 [2024-11-20 07:22:52.423076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:28.282 07:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73235 00:25:28.541 [2024-11-20 07:22:52.697506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.917 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oW5vUL47aQ 00:25:29.917 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:29.917 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:25:29.918 00:25:29.918 real 0m4.844s 00:25:29.918 user 0m5.978s 00:25:29.918 sys 0m0.568s 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:29.918 07:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.918 ************************************ 00:25:29.918 END TEST raid_read_error_test 00:25:29.918 ************************************ 00:25:29.918 07:22:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:29.918 07:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:29.918 07:22:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.918 07:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:29.918 ************************************ 00:25:29.918 START TEST raid_write_error_test 00:25:29.918 ************************************ 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.riYOuC86Nr 00:25:29.918 07:22:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73376 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73376 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73376 ']' 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.918 07:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.918 [2024-11-20 07:22:54.028624] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:29.918 [2024-11-20 07:22:54.028839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73376 ] 00:25:30.176 [2024-11-20 07:22:54.231199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.176 [2024-11-20 07:22:54.388516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.435 [2024-11-20 07:22:54.616710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.435 [2024-11-20 07:22:54.616818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 BaseBdev1_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 true 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 [2024-11-20 07:22:55.092957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:31.003 [2024-11-20 07:22:55.093028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.003 [2024-11-20 07:22:55.093061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:31.003 [2024-11-20 07:22:55.093081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.003 BaseBdev1 00:25:31.003 [2024-11-20 07:22:55.096024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.003 [2024-11-20 07:22:55.096076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 BaseBdev2_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:31.003 07:22:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 true 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 [2024-11-20 07:22:55.153369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:31.003 [2024-11-20 07:22:55.153445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.003 [2024-11-20 07:22:55.153475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:31.003 [2024-11-20 07:22:55.153493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.003 BaseBdev2 00:25:31.003 [2024-11-20 07:22:55.156417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.003 [2024-11-20 07:22:55.156470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:25:31.003 BaseBdev3_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 true 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.003 [2024-11-20 07:22:55.244420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:31.003 [2024-11-20 07:22:55.244504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.003 [2024-11-20 07:22:55.244538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:31.003 [2024-11-20 07:22:55.244560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.003 [2024-11-20 07:22:55.248069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.003 [2024-11-20 07:22:55.248132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:31.003 BaseBdev3 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.003 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.263 BaseBdev4_malloc 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.263 true 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.263 [2024-11-20 07:22:55.312007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:31.263 [2024-11-20 07:22:55.312088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.263 [2024-11-20 07:22:55.312122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:31.263 [2024-11-20 07:22:55.312143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.263 [2024-11-20 07:22:55.315533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.263 [2024-11-20 07:22:55.315609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:31.263 BaseBdev4 
00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.263 [2024-11-20 07:22:55.320128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.263 [2024-11-20 07:22:55.323143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.263 [2024-11-20 07:22:55.323292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:31.263 [2024-11-20 07:22:55.323421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:31.263 [2024-11-20 07:22:55.323817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:25:31.263 [2024-11-20 07:22:55.323857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:31.263 [2024-11-20 07:22:55.324260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:25:31.263 [2024-11-20 07:22:55.324556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:25:31.263 [2024-11-20 07:22:55.324611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:25:31.263 [2024-11-20 07:22:55.324940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.263 "name": "raid_bdev1", 00:25:31.263 "uuid": "67e37bdf-6712-4e66-a0e3-87fec05fd369", 00:25:31.263 "strip_size_kb": 64, 00:25:31.263 "state": "online", 00:25:31.263 "raid_level": "concat", 00:25:31.263 "superblock": true, 00:25:31.263 "num_base_bdevs": 4, 00:25:31.263 "num_base_bdevs_discovered": 4, 00:25:31.263 
"num_base_bdevs_operational": 4, 00:25:31.263 "base_bdevs_list": [ 00:25:31.263 { 00:25:31.263 "name": "BaseBdev1", 00:25:31.263 "uuid": "fa683c29-aeb6-5f67-861e-f601c6593386", 00:25:31.263 "is_configured": true, 00:25:31.263 "data_offset": 2048, 00:25:31.263 "data_size": 63488 00:25:31.263 }, 00:25:31.263 { 00:25:31.263 "name": "BaseBdev2", 00:25:31.263 "uuid": "d55ad3d0-197e-5cec-b09b-90b175a3fe69", 00:25:31.263 "is_configured": true, 00:25:31.263 "data_offset": 2048, 00:25:31.263 "data_size": 63488 00:25:31.263 }, 00:25:31.263 { 00:25:31.263 "name": "BaseBdev3", 00:25:31.263 "uuid": "e8faeaab-3196-5295-ab5d-9e9a267d78af", 00:25:31.263 "is_configured": true, 00:25:31.263 "data_offset": 2048, 00:25:31.263 "data_size": 63488 00:25:31.263 }, 00:25:31.263 { 00:25:31.263 "name": "BaseBdev4", 00:25:31.263 "uuid": "ae2cb6f6-6412-5525-aadb-b55853877ff7", 00:25:31.263 "is_configured": true, 00:25:31.263 "data_offset": 2048, 00:25:31.263 "data_size": 63488 00:25:31.263 } 00:25:31.263 ] 00:25:31.263 }' 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.263 07:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.884 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:31.884 07:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:31.884 [2024-11-20 07:22:55.946482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.820 07:22:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.820 "name": "raid_bdev1", 00:25:32.820 "uuid": "67e37bdf-6712-4e66-a0e3-87fec05fd369", 00:25:32.820 "strip_size_kb": 64, 00:25:32.820 "state": "online", 00:25:32.820 "raid_level": "concat", 00:25:32.820 "superblock": true, 00:25:32.820 "num_base_bdevs": 4, 00:25:32.820 "num_base_bdevs_discovered": 4, 00:25:32.820 "num_base_bdevs_operational": 4, 00:25:32.820 "base_bdevs_list": [ 00:25:32.820 { 00:25:32.820 "name": "BaseBdev1", 00:25:32.820 "uuid": "fa683c29-aeb6-5f67-861e-f601c6593386", 00:25:32.820 "is_configured": true, 00:25:32.820 "data_offset": 2048, 00:25:32.820 "data_size": 63488 00:25:32.820 }, 00:25:32.820 { 00:25:32.820 "name": "BaseBdev2", 00:25:32.820 "uuid": "d55ad3d0-197e-5cec-b09b-90b175a3fe69", 00:25:32.820 "is_configured": true, 00:25:32.820 "data_offset": 2048, 00:25:32.820 "data_size": 63488 00:25:32.820 }, 00:25:32.820 { 00:25:32.820 "name": "BaseBdev3", 00:25:32.820 "uuid": "e8faeaab-3196-5295-ab5d-9e9a267d78af", 00:25:32.820 "is_configured": true, 00:25:32.820 "data_offset": 2048, 00:25:32.820 "data_size": 63488 00:25:32.820 }, 00:25:32.820 { 00:25:32.820 "name": "BaseBdev4", 00:25:32.820 "uuid": "ae2cb6f6-6412-5525-aadb-b55853877ff7", 00:25:32.820 "is_configured": true, 00:25:32.820 "data_offset": 2048, 00:25:32.820 "data_size": 63488 00:25:32.820 } 00:25:32.820 ] 00:25:32.820 }' 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.820 07:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.079 07:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:33.079 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.079 07:22:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.338 [2024-11-20 07:22:57.370458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:33.338 [2024-11-20 07:22:57.370504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.338 [2024-11-20 07:22:57.374646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.338 [2024-11-20 07:22:57.374803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.338 [2024-11-20 07:22:57.374902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.338 [2024-11-20 07:22:57.374938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:25:33.338 { 00:25:33.338 "results": [ 00:25:33.338 { 00:25:33.338 "job": "raid_bdev1", 00:25:33.338 "core_mask": "0x1", 00:25:33.339 "workload": "randrw", 00:25:33.339 "percentage": 50, 00:25:33.339 "status": "finished", 00:25:33.339 "queue_depth": 1, 00:25:33.339 "io_size": 131072, 00:25:33.339 "runtime": 1.421332, 00:25:33.339 "iops": 10317.082849045824, 00:25:33.339 "mibps": 1289.635356130728, 00:25:33.339 "io_failed": 1, 00:25:33.339 "io_timeout": 0, 00:25:33.339 "avg_latency_us": 135.634724111211, 00:25:33.339 "min_latency_us": 37.236363636363635, 00:25:33.339 "max_latency_us": 1854.370909090909 00:25:33.339 } 00:25:33.339 ], 00:25:33.339 "core_count": 1 00:25:33.339 } 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73376 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73376 ']' 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73376 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73376 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.339 killing process with pid 73376 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73376' 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73376 00:25:33.339 07:22:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73376 00:25:33.339 [2024-11-20 07:22:57.409548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.598 [2024-11-20 07:22:57.722152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.riYOuC86Nr 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:25:34.534 00:25:34.534 real 0m4.938s 00:25:34.534 user 0m6.067s 
00:25:34.534 sys 0m0.625s 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.534 07:22:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.534 ************************************ 00:25:34.534 END TEST raid_write_error_test 00:25:34.534 ************************************ 00:25:34.792 07:22:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:34.792 07:22:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:34.792 07:22:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:34.793 07:22:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.793 07:22:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.793 ************************************ 00:25:34.793 START TEST raid_state_function_test 00:25:34.793 ************************************ 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:34.793 
07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:34.793 07:22:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73525 00:25:34.793 Process raid pid: 73525 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73525' 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73525 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73525 ']' 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.793 07:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.793 [2024-11-20 07:22:58.991046] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:34.793 [2024-11-20 07:22:58.991235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.051 [2024-11-20 07:22:59.178280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.051 [2024-11-20 07:22:59.310026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.374 [2024-11-20 07:22:59.518780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.374 [2024-11-20 07:22:59.518841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.939 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.939 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:35.939 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:35.939 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.939 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.939 [2024-11-20 07:23:00.019001] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:35.939 [2024-11-20 07:23:00.019066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:35.939 [2024-11-20 07:23:00.019084] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:35.939 [2024-11-20 07:23:00.019100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:35.940 [2024-11-20 07:23:00.019111] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:25:35.940 [2024-11-20 07:23:00.019126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:35.940 [2024-11-20 07:23:00.019136] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:35.940 [2024-11-20 07:23:00.019149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.940 "name": "Existed_Raid", 00:25:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.940 "strip_size_kb": 0, 00:25:35.940 "state": "configuring", 00:25:35.940 "raid_level": "raid1", 00:25:35.940 "superblock": false, 00:25:35.940 "num_base_bdevs": 4, 00:25:35.940 "num_base_bdevs_discovered": 0, 00:25:35.940 "num_base_bdevs_operational": 4, 00:25:35.940 "base_bdevs_list": [ 00:25:35.940 { 00:25:35.940 "name": "BaseBdev1", 00:25:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.940 "is_configured": false, 00:25:35.940 "data_offset": 0, 00:25:35.940 "data_size": 0 00:25:35.940 }, 00:25:35.940 { 00:25:35.940 "name": "BaseBdev2", 00:25:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.940 "is_configured": false, 00:25:35.940 "data_offset": 0, 00:25:35.940 "data_size": 0 00:25:35.940 }, 00:25:35.940 { 00:25:35.940 "name": "BaseBdev3", 00:25:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.940 "is_configured": false, 00:25:35.940 "data_offset": 0, 00:25:35.940 "data_size": 0 00:25:35.940 }, 00:25:35.940 { 00:25:35.940 "name": "BaseBdev4", 00:25:35.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.940 "is_configured": false, 00:25:35.940 "data_offset": 0, 00:25:35.940 "data_size": 0 00:25:35.940 } 00:25:35.940 ] 00:25:35.940 }' 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.940 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 [2024-11-20 07:23:00.523102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:36.505 [2024-11-20 07:23:00.523154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 [2024-11-20 07:23:00.531076] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:36.505 [2024-11-20 07:23:00.531130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:36.505 [2024-11-20 07:23:00.531145] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:36.505 [2024-11-20 07:23:00.531161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:36.505 [2024-11-20 07:23:00.531171] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:36.505 [2024-11-20 07:23:00.531200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:36.505 [2024-11-20 07:23:00.531209] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:36.505 [2024-11-20 07:23:00.531223] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 [2024-11-20 07:23:00.574676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.505 BaseBdev1 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.505 [ 00:25:36.505 { 00:25:36.505 "name": "BaseBdev1", 00:25:36.505 "aliases": [ 00:25:36.505 "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca" 00:25:36.505 ], 00:25:36.505 "product_name": "Malloc disk", 00:25:36.505 "block_size": 512, 00:25:36.505 "num_blocks": 65536, 00:25:36.505 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:36.505 "assigned_rate_limits": { 00:25:36.505 "rw_ios_per_sec": 0, 00:25:36.505 "rw_mbytes_per_sec": 0, 00:25:36.505 "r_mbytes_per_sec": 0, 00:25:36.505 "w_mbytes_per_sec": 0 00:25:36.505 }, 00:25:36.505 "claimed": true, 00:25:36.505 "claim_type": "exclusive_write", 00:25:36.505 "zoned": false, 00:25:36.505 "supported_io_types": { 00:25:36.505 "read": true, 00:25:36.505 "write": true, 00:25:36.505 "unmap": true, 00:25:36.505 "flush": true, 00:25:36.505 "reset": true, 00:25:36.505 "nvme_admin": false, 00:25:36.505 "nvme_io": false, 00:25:36.505 "nvme_io_md": false, 00:25:36.505 "write_zeroes": true, 00:25:36.505 "zcopy": true, 00:25:36.505 "get_zone_info": false, 00:25:36.505 "zone_management": false, 00:25:36.505 "zone_append": false, 00:25:36.505 "compare": false, 00:25:36.505 "compare_and_write": false, 00:25:36.505 "abort": true, 00:25:36.505 "seek_hole": false, 00:25:36.505 "seek_data": false, 00:25:36.505 "copy": true, 00:25:36.505 "nvme_iov_md": false 00:25:36.505 }, 00:25:36.505 "memory_domains": [ 00:25:36.505 { 00:25:36.505 "dma_device_id": "system", 00:25:36.505 "dma_device_type": 1 00:25:36.505 }, 00:25:36.505 { 00:25:36.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.505 "dma_device_type": 2 00:25:36.505 } 00:25:36.505 ], 00:25:36.505 "driver_specific": {} 00:25:36.505 } 00:25:36.505 ] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:36.505 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.506 "name": "Existed_Raid", 
00:25:36.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.506 "strip_size_kb": 0, 00:25:36.506 "state": "configuring", 00:25:36.506 "raid_level": "raid1", 00:25:36.506 "superblock": false, 00:25:36.506 "num_base_bdevs": 4, 00:25:36.506 "num_base_bdevs_discovered": 1, 00:25:36.506 "num_base_bdevs_operational": 4, 00:25:36.506 "base_bdevs_list": [ 00:25:36.506 { 00:25:36.506 "name": "BaseBdev1", 00:25:36.506 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:36.506 "is_configured": true, 00:25:36.506 "data_offset": 0, 00:25:36.506 "data_size": 65536 00:25:36.506 }, 00:25:36.506 { 00:25:36.506 "name": "BaseBdev2", 00:25:36.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.506 "is_configured": false, 00:25:36.506 "data_offset": 0, 00:25:36.506 "data_size": 0 00:25:36.506 }, 00:25:36.506 { 00:25:36.506 "name": "BaseBdev3", 00:25:36.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.506 "is_configured": false, 00:25:36.506 "data_offset": 0, 00:25:36.506 "data_size": 0 00:25:36.506 }, 00:25:36.506 { 00:25:36.506 "name": "BaseBdev4", 00:25:36.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.506 "is_configured": false, 00:25:36.506 "data_offset": 0, 00:25:36.506 "data_size": 0 00:25:36.506 } 00:25:36.506 ] 00:25:36.506 }' 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.506 07:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 [2024-11-20 07:23:01.098910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:37.073 [2024-11-20 07:23:01.098978] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 [2024-11-20 07:23:01.106961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.073 [2024-11-20 07:23:01.109407] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:37.073 [2024-11-20 07:23:01.109475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:37.073 [2024-11-20 07:23:01.109506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:37.073 [2024-11-20 07:23:01.109523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:37.073 [2024-11-20 07:23:01.109534] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:37.073 [2024-11-20 07:23:01.109548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:37.073 
07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:37.073 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.074 "name": "Existed_Raid", 00:25:37.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.074 "strip_size_kb": 0, 00:25:37.074 "state": "configuring", 00:25:37.074 "raid_level": "raid1", 00:25:37.074 "superblock": false, 00:25:37.074 "num_base_bdevs": 4, 00:25:37.074 "num_base_bdevs_discovered": 1, 
00:25:37.074 "num_base_bdevs_operational": 4, 00:25:37.074 "base_bdevs_list": [ 00:25:37.074 { 00:25:37.074 "name": "BaseBdev1", 00:25:37.074 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:37.074 "is_configured": true, 00:25:37.074 "data_offset": 0, 00:25:37.074 "data_size": 65536 00:25:37.074 }, 00:25:37.074 { 00:25:37.074 "name": "BaseBdev2", 00:25:37.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.074 "is_configured": false, 00:25:37.074 "data_offset": 0, 00:25:37.074 "data_size": 0 00:25:37.074 }, 00:25:37.074 { 00:25:37.074 "name": "BaseBdev3", 00:25:37.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.074 "is_configured": false, 00:25:37.074 "data_offset": 0, 00:25:37.074 "data_size": 0 00:25:37.074 }, 00:25:37.074 { 00:25:37.074 "name": "BaseBdev4", 00:25:37.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.074 "is_configured": false, 00:25:37.074 "data_offset": 0, 00:25:37.074 "data_size": 0 00:25:37.074 } 00:25:37.074 ] 00:25:37.074 }' 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.074 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.642 [2024-11-20 07:23:01.660198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:37.642 BaseBdev2 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.642 [ 00:25:37.642 { 00:25:37.642 "name": "BaseBdev2", 00:25:37.642 "aliases": [ 00:25:37.642 "522fc672-7973-40ae-8aa6-c98334c5aa61" 00:25:37.642 ], 00:25:37.642 "product_name": "Malloc disk", 00:25:37.642 "block_size": 512, 00:25:37.642 "num_blocks": 65536, 00:25:37.642 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:37.642 "assigned_rate_limits": { 00:25:37.642 "rw_ios_per_sec": 0, 00:25:37.642 "rw_mbytes_per_sec": 0, 00:25:37.642 "r_mbytes_per_sec": 0, 00:25:37.642 "w_mbytes_per_sec": 0 00:25:37.642 }, 00:25:37.642 "claimed": true, 00:25:37.642 "claim_type": "exclusive_write", 00:25:37.642 "zoned": false, 00:25:37.642 "supported_io_types": { 00:25:37.642 "read": true, 
00:25:37.642 "write": true, 00:25:37.642 "unmap": true, 00:25:37.642 "flush": true, 00:25:37.642 "reset": true, 00:25:37.642 "nvme_admin": false, 00:25:37.642 "nvme_io": false, 00:25:37.642 "nvme_io_md": false, 00:25:37.642 "write_zeroes": true, 00:25:37.642 "zcopy": true, 00:25:37.642 "get_zone_info": false, 00:25:37.642 "zone_management": false, 00:25:37.642 "zone_append": false, 00:25:37.642 "compare": false, 00:25:37.642 "compare_and_write": false, 00:25:37.642 "abort": true, 00:25:37.642 "seek_hole": false, 00:25:37.642 "seek_data": false, 00:25:37.642 "copy": true, 00:25:37.642 "nvme_iov_md": false 00:25:37.642 }, 00:25:37.642 "memory_domains": [ 00:25:37.642 { 00:25:37.642 "dma_device_id": "system", 00:25:37.642 "dma_device_type": 1 00:25:37.642 }, 00:25:37.642 { 00:25:37.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.642 "dma_device_type": 2 00:25:37.642 } 00:25:37.642 ], 00:25:37.642 "driver_specific": {} 00:25:37.642 } 00:25:37.642 ] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.642 "name": "Existed_Raid", 00:25:37.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.642 "strip_size_kb": 0, 00:25:37.642 "state": "configuring", 00:25:37.642 "raid_level": "raid1", 00:25:37.642 "superblock": false, 00:25:37.642 "num_base_bdevs": 4, 00:25:37.642 "num_base_bdevs_discovered": 2, 00:25:37.642 "num_base_bdevs_operational": 4, 00:25:37.642 "base_bdevs_list": [ 00:25:37.642 { 00:25:37.642 "name": "BaseBdev1", 00:25:37.642 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:37.642 "is_configured": true, 00:25:37.642 "data_offset": 0, 00:25:37.642 "data_size": 65536 00:25:37.642 }, 00:25:37.642 { 00:25:37.642 "name": "BaseBdev2", 00:25:37.642 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:37.642 "is_configured": true, 
00:25:37.642 "data_offset": 0, 00:25:37.642 "data_size": 65536 00:25:37.642 }, 00:25:37.642 { 00:25:37.642 "name": "BaseBdev3", 00:25:37.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.642 "is_configured": false, 00:25:37.642 "data_offset": 0, 00:25:37.642 "data_size": 0 00:25:37.642 }, 00:25:37.642 { 00:25:37.642 "name": "BaseBdev4", 00:25:37.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.642 "is_configured": false, 00:25:37.642 "data_offset": 0, 00:25:37.642 "data_size": 0 00:25:37.642 } 00:25:37.642 ] 00:25:37.642 }' 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.642 07:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.211 [2024-11-20 07:23:02.253012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:38.211 BaseBdev3 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.211 [ 00:25:38.211 { 00:25:38.211 "name": "BaseBdev3", 00:25:38.211 "aliases": [ 00:25:38.211 "904f981c-0ec4-42dd-9d91-a8d7e40e3615" 00:25:38.211 ], 00:25:38.211 "product_name": "Malloc disk", 00:25:38.211 "block_size": 512, 00:25:38.211 "num_blocks": 65536, 00:25:38.211 "uuid": "904f981c-0ec4-42dd-9d91-a8d7e40e3615", 00:25:38.211 "assigned_rate_limits": { 00:25:38.211 "rw_ios_per_sec": 0, 00:25:38.211 "rw_mbytes_per_sec": 0, 00:25:38.211 "r_mbytes_per_sec": 0, 00:25:38.211 "w_mbytes_per_sec": 0 00:25:38.211 }, 00:25:38.211 "claimed": true, 00:25:38.211 "claim_type": "exclusive_write", 00:25:38.211 "zoned": false, 00:25:38.211 "supported_io_types": { 00:25:38.211 "read": true, 00:25:38.211 "write": true, 00:25:38.211 "unmap": true, 00:25:38.211 "flush": true, 00:25:38.211 "reset": true, 00:25:38.211 "nvme_admin": false, 00:25:38.211 "nvme_io": false, 00:25:38.211 "nvme_io_md": false, 00:25:38.211 "write_zeroes": true, 00:25:38.211 "zcopy": true, 00:25:38.211 "get_zone_info": false, 00:25:38.211 "zone_management": false, 00:25:38.211 "zone_append": false, 00:25:38.211 "compare": false, 00:25:38.211 "compare_and_write": false, 
00:25:38.211 "abort": true, 00:25:38.211 "seek_hole": false, 00:25:38.211 "seek_data": false, 00:25:38.211 "copy": true, 00:25:38.211 "nvme_iov_md": false 00:25:38.211 }, 00:25:38.211 "memory_domains": [ 00:25:38.211 { 00:25:38.211 "dma_device_id": "system", 00:25:38.211 "dma_device_type": 1 00:25:38.211 }, 00:25:38.211 { 00:25:38.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.211 "dma_device_type": 2 00:25:38.211 } 00:25:38.211 ], 00:25:38.211 "driver_specific": {} 00:25:38.211 } 00:25:38.211 ] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.211 "name": "Existed_Raid", 00:25:38.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.211 "strip_size_kb": 0, 00:25:38.211 "state": "configuring", 00:25:38.211 "raid_level": "raid1", 00:25:38.211 "superblock": false, 00:25:38.211 "num_base_bdevs": 4, 00:25:38.211 "num_base_bdevs_discovered": 3, 00:25:38.211 "num_base_bdevs_operational": 4, 00:25:38.211 "base_bdevs_list": [ 00:25:38.211 { 00:25:38.211 "name": "BaseBdev1", 00:25:38.211 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:38.211 "is_configured": true, 00:25:38.211 "data_offset": 0, 00:25:38.211 "data_size": 65536 00:25:38.211 }, 00:25:38.211 { 00:25:38.211 "name": "BaseBdev2", 00:25:38.211 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:38.211 "is_configured": true, 00:25:38.211 "data_offset": 0, 00:25:38.211 "data_size": 65536 00:25:38.211 }, 00:25:38.211 { 00:25:38.211 "name": "BaseBdev3", 00:25:38.211 "uuid": "904f981c-0ec4-42dd-9d91-a8d7e40e3615", 00:25:38.211 "is_configured": true, 00:25:38.211 "data_offset": 0, 00:25:38.211 "data_size": 65536 00:25:38.211 }, 00:25:38.211 { 00:25:38.211 "name": "BaseBdev4", 00:25:38.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.211 "is_configured": false, 
00:25:38.211 "data_offset": 0, 00:25:38.211 "data_size": 0 00:25:38.211 } 00:25:38.211 ] 00:25:38.211 }' 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.211 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.779 [2024-11-20 07:23:02.864447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:38.779 [2024-11-20 07:23:02.864503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:38.779 [2024-11-20 07:23:02.864516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:38.779 [2024-11-20 07:23:02.864911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:38.779 [2024-11-20 07:23:02.865163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:38.779 [2024-11-20 07:23:02.865185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:38.779 [2024-11-20 07:23:02.865500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.779 BaseBdev4 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.779 [ 00:25:38.779 { 00:25:38.779 "name": "BaseBdev4", 00:25:38.779 "aliases": [ 00:25:38.779 "97e0dca3-81cd-48ce-914b-f50516132e56" 00:25:38.779 ], 00:25:38.779 "product_name": "Malloc disk", 00:25:38.779 "block_size": 512, 00:25:38.779 "num_blocks": 65536, 00:25:38.779 "uuid": "97e0dca3-81cd-48ce-914b-f50516132e56", 00:25:38.779 "assigned_rate_limits": { 00:25:38.779 "rw_ios_per_sec": 0, 00:25:38.779 "rw_mbytes_per_sec": 0, 00:25:38.779 "r_mbytes_per_sec": 0, 00:25:38.779 "w_mbytes_per_sec": 0 00:25:38.779 }, 00:25:38.779 "claimed": true, 00:25:38.779 "claim_type": "exclusive_write", 00:25:38.779 "zoned": false, 00:25:38.779 "supported_io_types": { 00:25:38.779 "read": true, 00:25:38.779 "write": true, 00:25:38.779 "unmap": true, 00:25:38.779 "flush": true, 00:25:38.779 "reset": true, 00:25:38.779 
"nvme_admin": false, 00:25:38.779 "nvme_io": false, 00:25:38.779 "nvme_io_md": false, 00:25:38.779 "write_zeroes": true, 00:25:38.779 "zcopy": true, 00:25:38.779 "get_zone_info": false, 00:25:38.779 "zone_management": false, 00:25:38.779 "zone_append": false, 00:25:38.779 "compare": false, 00:25:38.779 "compare_and_write": false, 00:25:38.779 "abort": true, 00:25:38.779 "seek_hole": false, 00:25:38.779 "seek_data": false, 00:25:38.779 "copy": true, 00:25:38.779 "nvme_iov_md": false 00:25:38.779 }, 00:25:38.779 "memory_domains": [ 00:25:38.779 { 00:25:38.779 "dma_device_id": "system", 00:25:38.779 "dma_device_type": 1 00:25:38.779 }, 00:25:38.779 { 00:25:38.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.779 "dma_device_type": 2 00:25:38.779 } 00:25:38.779 ], 00:25:38.779 "driver_specific": {} 00:25:38.779 } 00:25:38.779 ] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:38.779 07:23:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.779 "name": "Existed_Raid", 00:25:38.779 "uuid": "f0e93710-1b5d-4482-a387-8a36a8ded6f5", 00:25:38.779 "strip_size_kb": 0, 00:25:38.779 "state": "online", 00:25:38.779 "raid_level": "raid1", 00:25:38.779 "superblock": false, 00:25:38.779 "num_base_bdevs": 4, 00:25:38.779 "num_base_bdevs_discovered": 4, 00:25:38.779 "num_base_bdevs_operational": 4, 00:25:38.779 "base_bdevs_list": [ 00:25:38.779 { 00:25:38.779 "name": "BaseBdev1", 00:25:38.779 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:38.779 "is_configured": true, 00:25:38.779 "data_offset": 0, 00:25:38.779 "data_size": 65536 00:25:38.779 }, 00:25:38.779 { 00:25:38.779 "name": "BaseBdev2", 00:25:38.779 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:38.779 "is_configured": true, 00:25:38.779 "data_offset": 0, 00:25:38.779 "data_size": 65536 00:25:38.779 }, 00:25:38.779 { 00:25:38.779 "name": "BaseBdev3", 00:25:38.779 "uuid": 
"904f981c-0ec4-42dd-9d91-a8d7e40e3615", 00:25:38.779 "is_configured": true, 00:25:38.779 "data_offset": 0, 00:25:38.779 "data_size": 65536 00:25:38.779 }, 00:25:38.779 { 00:25:38.779 "name": "BaseBdev4", 00:25:38.779 "uuid": "97e0dca3-81cd-48ce-914b-f50516132e56", 00:25:38.779 "is_configured": true, 00:25:38.779 "data_offset": 0, 00:25:38.779 "data_size": 65536 00:25:38.779 } 00:25:38.779 ] 00:25:38.779 }' 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.779 07:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 [2024-11-20 07:23:03.425132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:39.351 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.351 07:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.351 "name": "Existed_Raid", 00:25:39.351 "aliases": [ 00:25:39.351 "f0e93710-1b5d-4482-a387-8a36a8ded6f5" 00:25:39.351 ], 00:25:39.351 "product_name": "Raid Volume", 00:25:39.351 "block_size": 512, 00:25:39.351 "num_blocks": 65536, 00:25:39.351 "uuid": "f0e93710-1b5d-4482-a387-8a36a8ded6f5", 00:25:39.351 "assigned_rate_limits": { 00:25:39.351 "rw_ios_per_sec": 0, 00:25:39.351 "rw_mbytes_per_sec": 0, 00:25:39.351 "r_mbytes_per_sec": 0, 00:25:39.351 "w_mbytes_per_sec": 0 00:25:39.351 }, 00:25:39.351 "claimed": false, 00:25:39.351 "zoned": false, 00:25:39.351 "supported_io_types": { 00:25:39.351 "read": true, 00:25:39.351 "write": true, 00:25:39.351 "unmap": false, 00:25:39.351 "flush": false, 00:25:39.351 "reset": true, 00:25:39.351 "nvme_admin": false, 00:25:39.351 "nvme_io": false, 00:25:39.351 "nvme_io_md": false, 00:25:39.351 "write_zeroes": true, 00:25:39.351 "zcopy": false, 00:25:39.351 "get_zone_info": false, 00:25:39.351 "zone_management": false, 00:25:39.351 "zone_append": false, 00:25:39.351 "compare": false, 00:25:39.351 "compare_and_write": false, 00:25:39.351 "abort": false, 00:25:39.351 "seek_hole": false, 00:25:39.351 "seek_data": false, 00:25:39.351 "copy": false, 00:25:39.351 "nvme_iov_md": false 00:25:39.351 }, 00:25:39.351 "memory_domains": [ 00:25:39.351 { 00:25:39.351 "dma_device_id": "system", 00:25:39.351 "dma_device_type": 1 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.351 "dma_device_type": 2 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "system", 00:25:39.351 "dma_device_type": 1 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.351 "dma_device_type": 2 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "system", 00:25:39.351 "dma_device_type": 1 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:39.351 "dma_device_type": 2 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "system", 00:25:39.351 "dma_device_type": 1 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.351 "dma_device_type": 2 00:25:39.351 } 00:25:39.351 ], 00:25:39.351 "driver_specific": { 00:25:39.351 "raid": { 00:25:39.351 "uuid": "f0e93710-1b5d-4482-a387-8a36a8ded6f5", 00:25:39.351 "strip_size_kb": 0, 00:25:39.351 "state": "online", 00:25:39.351 "raid_level": "raid1", 00:25:39.351 "superblock": false, 00:25:39.351 "num_base_bdevs": 4, 00:25:39.351 "num_base_bdevs_discovered": 4, 00:25:39.351 "num_base_bdevs_operational": 4, 00:25:39.351 "base_bdevs_list": [ 00:25:39.351 { 00:25:39.351 "name": "BaseBdev1", 00:25:39.351 "uuid": "ff3fea5a-0b85-4ebc-8ea3-0e7c865f70ca", 00:25:39.351 "is_configured": true, 00:25:39.351 "data_offset": 0, 00:25:39.351 "data_size": 65536 00:25:39.351 }, 00:25:39.351 { 00:25:39.351 "name": "BaseBdev2", 00:25:39.351 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:39.352 "is_configured": true, 00:25:39.352 "data_offset": 0, 00:25:39.352 "data_size": 65536 00:25:39.352 }, 00:25:39.352 { 00:25:39.352 "name": "BaseBdev3", 00:25:39.352 "uuid": "904f981c-0ec4-42dd-9d91-a8d7e40e3615", 00:25:39.352 "is_configured": true, 00:25:39.352 "data_offset": 0, 00:25:39.352 "data_size": 65536 00:25:39.352 }, 00:25:39.352 { 00:25:39.352 "name": "BaseBdev4", 00:25:39.352 "uuid": "97e0dca3-81cd-48ce-914b-f50516132e56", 00:25:39.352 "is_configured": true, 00:25:39.352 "data_offset": 0, 00:25:39.352 "data_size": 65536 00:25:39.352 } 00:25:39.352 ] 00:25:39.352 } 00:25:39.352 } 00:25:39.352 }' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:39.352 BaseBdev2 00:25:39.352 BaseBdev3 
00:25:39.352 BaseBdev4' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.352 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.613 07:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:39.613 07:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.613 [2024-11-20 07:23:03.784907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:39.613 
07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.613 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.872 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.872 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:39.872 "name": "Existed_Raid", 00:25:39.872 "uuid": "f0e93710-1b5d-4482-a387-8a36a8ded6f5", 00:25:39.872 "strip_size_kb": 0, 00:25:39.872 "state": "online", 00:25:39.872 "raid_level": "raid1", 00:25:39.872 "superblock": false, 00:25:39.872 "num_base_bdevs": 4, 00:25:39.872 "num_base_bdevs_discovered": 3, 00:25:39.872 "num_base_bdevs_operational": 3, 00:25:39.872 "base_bdevs_list": [ 00:25:39.872 { 00:25:39.872 "name": null, 00:25:39.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.872 "is_configured": false, 00:25:39.872 "data_offset": 0, 00:25:39.872 "data_size": 65536 00:25:39.872 }, 00:25:39.872 { 00:25:39.872 "name": "BaseBdev2", 00:25:39.872 "uuid": "522fc672-7973-40ae-8aa6-c98334c5aa61", 00:25:39.872 "is_configured": true, 00:25:39.872 "data_offset": 0, 00:25:39.872 "data_size": 65536 00:25:39.872 }, 00:25:39.872 { 00:25:39.872 "name": "BaseBdev3", 00:25:39.872 "uuid": "904f981c-0ec4-42dd-9d91-a8d7e40e3615", 00:25:39.872 "is_configured": true, 00:25:39.872 "data_offset": 0, 
00:25:39.872 "data_size": 65536 00:25:39.872 }, 00:25:39.872 { 00:25:39.872 "name": "BaseBdev4", 00:25:39.872 "uuid": "97e0dca3-81cd-48ce-914b-f50516132e56", 00:25:39.872 "is_configured": true, 00:25:39.872 "data_offset": 0, 00:25:39.872 "data_size": 65536 00:25:39.872 } 00:25:39.872 ] 00:25:39.872 }' 00:25:39.872 07:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:39.872 07:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.131 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.389 [2024-11-20 07:23:04.464667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:40.389 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:40.390 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.390 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.390 [2024-11-20 07:23:04.611235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:40.648 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 [2024-11-20 07:23:04.758442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:40.649 [2024-11-20 07:23:04.758572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:40.649 [2024-11-20 07:23:04.845908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:40.649 [2024-11-20 07:23:04.845988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:40.649 [2024-11-20 07:23:04.846010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.649 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 BaseBdev2 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 [ 00:25:40.909 { 00:25:40.909 "name": "BaseBdev2", 00:25:40.909 "aliases": [ 00:25:40.909 "f5951f29-b40c-4980-a3f0-caaa119b0c55" 00:25:40.909 ], 00:25:40.909 "product_name": "Malloc disk", 00:25:40.909 "block_size": 512, 00:25:40.909 "num_blocks": 65536, 00:25:40.909 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:40.909 "assigned_rate_limits": { 00:25:40.909 "rw_ios_per_sec": 0, 00:25:40.909 "rw_mbytes_per_sec": 0, 00:25:40.909 "r_mbytes_per_sec": 0, 00:25:40.909 "w_mbytes_per_sec": 0 00:25:40.909 }, 00:25:40.909 "claimed": false, 00:25:40.909 "zoned": false, 00:25:40.909 "supported_io_types": { 00:25:40.909 "read": true, 00:25:40.909 "write": true, 00:25:40.909 "unmap": true, 00:25:40.909 "flush": true, 00:25:40.909 "reset": true, 00:25:40.909 "nvme_admin": false, 00:25:40.909 "nvme_io": false, 00:25:40.909 "nvme_io_md": false, 00:25:40.909 "write_zeroes": true, 00:25:40.909 "zcopy": true, 00:25:40.909 "get_zone_info": false, 00:25:40.909 "zone_management": false, 00:25:40.909 "zone_append": false, 
00:25:40.909 "compare": false, 00:25:40.909 "compare_and_write": false, 00:25:40.909 "abort": true, 00:25:40.909 "seek_hole": false, 00:25:40.909 "seek_data": false, 00:25:40.909 "copy": true, 00:25:40.909 "nvme_iov_md": false 00:25:40.909 }, 00:25:40.909 "memory_domains": [ 00:25:40.909 { 00:25:40.909 "dma_device_id": "system", 00:25:40.909 "dma_device_type": 1 00:25:40.909 }, 00:25:40.909 { 00:25:40.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.909 "dma_device_type": 2 00:25:40.909 } 00:25:40.909 ], 00:25:40.909 "driver_specific": {} 00:25:40.909 } 00:25:40.909 ] 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 BaseBdev3 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 [ 00:25:40.909 { 00:25:40.909 "name": "BaseBdev3", 00:25:40.909 "aliases": [ 00:25:40.909 "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f" 00:25:40.909 ], 00:25:40.909 "product_name": "Malloc disk", 00:25:40.909 "block_size": 512, 00:25:40.909 "num_blocks": 65536, 00:25:40.909 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:40.909 "assigned_rate_limits": { 00:25:40.909 "rw_ios_per_sec": 0, 00:25:40.909 "rw_mbytes_per_sec": 0, 00:25:40.909 "r_mbytes_per_sec": 0, 00:25:40.909 "w_mbytes_per_sec": 0 00:25:40.909 }, 00:25:40.909 "claimed": false, 00:25:40.909 "zoned": false, 00:25:40.909 "supported_io_types": { 00:25:40.909 "read": true, 00:25:40.909 "write": true, 00:25:40.909 "unmap": true, 00:25:40.909 "flush": true, 00:25:40.909 "reset": true, 00:25:40.909 "nvme_admin": false, 00:25:40.909 "nvme_io": false, 00:25:40.909 "nvme_io_md": false, 00:25:40.909 "write_zeroes": true, 00:25:40.909 "zcopy": true, 00:25:40.909 "get_zone_info": false, 00:25:40.909 "zone_management": false, 00:25:40.909 "zone_append": false, 
00:25:40.909 "compare": false, 00:25:40.909 "compare_and_write": false, 00:25:40.909 "abort": true, 00:25:40.909 "seek_hole": false, 00:25:40.909 "seek_data": false, 00:25:40.909 "copy": true, 00:25:40.909 "nvme_iov_md": false 00:25:40.909 }, 00:25:40.909 "memory_domains": [ 00:25:40.909 { 00:25:40.909 "dma_device_id": "system", 00:25:40.909 "dma_device_type": 1 00:25:40.909 }, 00:25:40.909 { 00:25:40.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.909 "dma_device_type": 2 00:25:40.909 } 00:25:40.909 ], 00:25:40.909 "driver_specific": {} 00:25:40.909 } 00:25:40.909 ] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 BaseBdev4 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 [ 00:25:40.909 { 00:25:40.909 "name": "BaseBdev4", 00:25:40.909 "aliases": [ 00:25:40.909 "8b190410-d772-4c50-bd90-5c3075d5fbc1" 00:25:40.909 ], 00:25:40.909 "product_name": "Malloc disk", 00:25:40.909 "block_size": 512, 00:25:40.909 "num_blocks": 65536, 00:25:40.909 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:40.909 "assigned_rate_limits": { 00:25:40.909 "rw_ios_per_sec": 0, 00:25:40.909 "rw_mbytes_per_sec": 0, 00:25:40.909 "r_mbytes_per_sec": 0, 00:25:40.909 "w_mbytes_per_sec": 0 00:25:40.909 }, 00:25:40.909 "claimed": false, 00:25:40.909 "zoned": false, 00:25:40.909 "supported_io_types": { 00:25:40.909 "read": true, 00:25:40.909 "write": true, 00:25:40.910 "unmap": true, 00:25:40.910 "flush": true, 00:25:40.910 "reset": true, 00:25:40.910 "nvme_admin": false, 00:25:40.910 "nvme_io": false, 00:25:40.910 "nvme_io_md": false, 00:25:40.910 "write_zeroes": true, 00:25:40.910 "zcopy": true, 00:25:40.910 "get_zone_info": false, 00:25:40.910 "zone_management": false, 00:25:40.910 "zone_append": false, 
00:25:40.910 "compare": false, 00:25:40.910 "compare_and_write": false, 00:25:40.910 "abort": true, 00:25:40.910 "seek_hole": false, 00:25:40.910 "seek_data": false, 00:25:40.910 "copy": true, 00:25:40.910 "nvme_iov_md": false 00:25:40.910 }, 00:25:40.910 "memory_domains": [ 00:25:40.910 { 00:25:40.910 "dma_device_id": "system", 00:25:40.910 "dma_device_type": 1 00:25:40.910 }, 00:25:40.910 { 00:25:40.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.910 "dma_device_type": 2 00:25:40.910 } 00:25:40.910 ], 00:25:40.910 "driver_specific": {} 00:25:40.910 } 00:25:40.910 ] 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 [2024-11-20 07:23:05.122458] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.910 [2024-11-20 07:23:05.122677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.910 [2024-11-20 07:23:05.122838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.910 [2024-11-20 07:23:05.125317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:40.910 [2024-11-20 07:23:05.125492] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:25:40.910 "name": "Existed_Raid", 00:25:40.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.910 "strip_size_kb": 0, 00:25:40.910 "state": "configuring", 00:25:40.910 "raid_level": "raid1", 00:25:40.910 "superblock": false, 00:25:40.910 "num_base_bdevs": 4, 00:25:40.910 "num_base_bdevs_discovered": 3, 00:25:40.910 "num_base_bdevs_operational": 4, 00:25:40.910 "base_bdevs_list": [ 00:25:40.910 { 00:25:40.910 "name": "BaseBdev1", 00:25:40.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.910 "is_configured": false, 00:25:40.910 "data_offset": 0, 00:25:40.910 "data_size": 0 00:25:40.910 }, 00:25:40.910 { 00:25:40.910 "name": "BaseBdev2", 00:25:40.910 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:40.910 "is_configured": true, 00:25:40.910 "data_offset": 0, 00:25:40.910 "data_size": 65536 00:25:40.910 }, 00:25:40.910 { 00:25:40.910 "name": "BaseBdev3", 00:25:40.910 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:40.910 "is_configured": true, 00:25:40.910 "data_offset": 0, 00:25:40.910 "data_size": 65536 00:25:40.910 }, 00:25:40.910 { 00:25:40.910 "name": "BaseBdev4", 00:25:40.910 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:40.910 "is_configured": true, 00:25:40.910 "data_offset": 0, 00:25:40.910 "data_size": 65536 00:25:40.910 } 00:25:40.910 ] 00:25:40.910 }' 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.910 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.477 [2024-11-20 07:23:05.674630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.477 "name": "Existed_Raid", 00:25:41.477 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:41.477 "strip_size_kb": 0, 00:25:41.477 "state": "configuring", 00:25:41.477 "raid_level": "raid1", 00:25:41.477 "superblock": false, 00:25:41.477 "num_base_bdevs": 4, 00:25:41.477 "num_base_bdevs_discovered": 2, 00:25:41.477 "num_base_bdevs_operational": 4, 00:25:41.477 "base_bdevs_list": [ 00:25:41.477 { 00:25:41.477 "name": "BaseBdev1", 00:25:41.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.477 "is_configured": false, 00:25:41.477 "data_offset": 0, 00:25:41.477 "data_size": 0 00:25:41.477 }, 00:25:41.477 { 00:25:41.477 "name": null, 00:25:41.477 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:41.477 "is_configured": false, 00:25:41.477 "data_offset": 0, 00:25:41.477 "data_size": 65536 00:25:41.477 }, 00:25:41.477 { 00:25:41.477 "name": "BaseBdev3", 00:25:41.477 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:41.477 "is_configured": true, 00:25:41.477 "data_offset": 0, 00:25:41.477 "data_size": 65536 00:25:41.477 }, 00:25:41.477 { 00:25:41.477 "name": "BaseBdev4", 00:25:41.477 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:41.477 "is_configured": true, 00:25:41.477 "data_offset": 0, 00:25:41.477 "data_size": 65536 00:25:41.477 } 00:25:41.477 ] 00:25:41.477 }' 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.477 07:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.044 [2024-11-20 07:23:06.324825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:42.044 BaseBdev1 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:42.044 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:42.045 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:42.045 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:42.045 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.045 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.303 [ 00:25:42.303 { 00:25:42.303 "name": "BaseBdev1", 00:25:42.303 "aliases": [ 00:25:42.303 "31de8473-90e1-4696-8762-a5d5d8789299" 00:25:42.303 ], 00:25:42.303 "product_name": "Malloc disk", 00:25:42.303 "block_size": 512, 00:25:42.303 "num_blocks": 65536, 00:25:42.303 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:42.303 "assigned_rate_limits": { 00:25:42.303 "rw_ios_per_sec": 0, 00:25:42.303 "rw_mbytes_per_sec": 0, 00:25:42.303 "r_mbytes_per_sec": 0, 00:25:42.303 "w_mbytes_per_sec": 0 00:25:42.303 }, 00:25:42.303 "claimed": true, 00:25:42.303 "claim_type": "exclusive_write", 00:25:42.303 "zoned": false, 00:25:42.303 "supported_io_types": { 00:25:42.303 "read": true, 00:25:42.303 "write": true, 00:25:42.303 "unmap": true, 00:25:42.303 "flush": true, 00:25:42.303 "reset": true, 00:25:42.303 "nvme_admin": false, 00:25:42.303 "nvme_io": false, 00:25:42.303 "nvme_io_md": false, 00:25:42.303 "write_zeroes": true, 00:25:42.303 "zcopy": true, 00:25:42.303 "get_zone_info": false, 00:25:42.303 "zone_management": false, 00:25:42.303 "zone_append": false, 00:25:42.303 "compare": false, 00:25:42.303 "compare_and_write": false, 00:25:42.303 "abort": true, 00:25:42.303 "seek_hole": false, 00:25:42.303 "seek_data": false, 00:25:42.303 "copy": true, 00:25:42.303 "nvme_iov_md": false 00:25:42.303 }, 00:25:42.303 "memory_domains": [ 00:25:42.303 { 00:25:42.303 "dma_device_id": "system", 00:25:42.303 "dma_device_type": 1 00:25:42.303 }, 00:25:42.303 { 00:25:42.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.303 "dma_device_type": 2 00:25:42.303 } 00:25:42.303 ], 00:25:42.303 "driver_specific": {} 00:25:42.303 } 00:25:42.303 ] 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.303 "name": "Existed_Raid", 00:25:42.303 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:42.303 "strip_size_kb": 0, 00:25:42.303 "state": "configuring", 00:25:42.303 "raid_level": "raid1", 00:25:42.303 "superblock": false, 00:25:42.303 "num_base_bdevs": 4, 00:25:42.303 "num_base_bdevs_discovered": 3, 00:25:42.303 "num_base_bdevs_operational": 4, 00:25:42.303 "base_bdevs_list": [ 00:25:42.303 { 00:25:42.303 "name": "BaseBdev1", 00:25:42.303 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:42.303 "is_configured": true, 00:25:42.303 "data_offset": 0, 00:25:42.303 "data_size": 65536 00:25:42.303 }, 00:25:42.303 { 00:25:42.303 "name": null, 00:25:42.303 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:42.303 "is_configured": false, 00:25:42.303 "data_offset": 0, 00:25:42.303 "data_size": 65536 00:25:42.303 }, 00:25:42.303 { 00:25:42.303 "name": "BaseBdev3", 00:25:42.303 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:42.303 "is_configured": true, 00:25:42.303 "data_offset": 0, 00:25:42.303 "data_size": 65536 00:25:42.303 }, 00:25:42.303 { 00:25:42.303 "name": "BaseBdev4", 00:25:42.303 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:42.303 "is_configured": true, 00:25:42.303 "data_offset": 0, 00:25:42.303 "data_size": 65536 00:25:42.303 } 00:25:42.303 ] 00:25:42.303 }' 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.303 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.906 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.906 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.906 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.906 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.907 [2024-11-20 07:23:06.945121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.907 07:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.907 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.907 "name": "Existed_Raid", 00:25:42.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.907 "strip_size_kb": 0, 00:25:42.907 "state": "configuring", 00:25:42.907 "raid_level": "raid1", 00:25:42.907 "superblock": false, 00:25:42.907 "num_base_bdevs": 4, 00:25:42.907 "num_base_bdevs_discovered": 2, 00:25:42.907 "num_base_bdevs_operational": 4, 00:25:42.907 "base_bdevs_list": [ 00:25:42.907 { 00:25:42.907 "name": "BaseBdev1", 00:25:42.907 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:42.907 "is_configured": true, 00:25:42.907 "data_offset": 0, 00:25:42.907 "data_size": 65536 00:25:42.907 }, 00:25:42.907 { 00:25:42.907 "name": null, 00:25:42.907 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:42.907 "is_configured": false, 00:25:42.907 "data_offset": 0, 00:25:42.907 "data_size": 65536 00:25:42.907 }, 00:25:42.907 { 00:25:42.907 "name": null, 00:25:42.907 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:42.907 "is_configured": false, 00:25:42.907 "data_offset": 0, 00:25:42.907 "data_size": 65536 00:25:42.907 }, 00:25:42.907 { 00:25:42.907 "name": "BaseBdev4", 00:25:42.907 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:42.907 "is_configured": true, 00:25:42.907 "data_offset": 0, 00:25:42.907 "data_size": 65536 00:25:42.907 } 00:25:42.907 ] 00:25:42.907 }' 00:25:42.907 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.907 07:23:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.486 [2024-11-20 07:23:07.525278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:43.486 07:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.486 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:43.486 "name": "Existed_Raid", 00:25:43.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.486 "strip_size_kb": 0, 00:25:43.486 "state": "configuring", 00:25:43.486 "raid_level": "raid1", 00:25:43.486 "superblock": false, 00:25:43.486 "num_base_bdevs": 4, 00:25:43.486 "num_base_bdevs_discovered": 3, 00:25:43.486 "num_base_bdevs_operational": 4, 00:25:43.486 "base_bdevs_list": [ 00:25:43.486 { 00:25:43.486 "name": "BaseBdev1", 00:25:43.486 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:43.486 "is_configured": true, 00:25:43.486 "data_offset": 0, 00:25:43.486 "data_size": 65536 00:25:43.487 }, 00:25:43.487 { 00:25:43.487 "name": null, 00:25:43.487 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:43.487 "is_configured": false, 00:25:43.487 "data_offset": 
0, 00:25:43.487 "data_size": 65536 00:25:43.487 }, 00:25:43.487 { 00:25:43.487 "name": "BaseBdev3", 00:25:43.487 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:43.487 "is_configured": true, 00:25:43.487 "data_offset": 0, 00:25:43.487 "data_size": 65536 00:25:43.487 }, 00:25:43.487 { 00:25:43.487 "name": "BaseBdev4", 00:25:43.487 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:43.487 "is_configured": true, 00:25:43.487 "data_offset": 0, 00:25:43.487 "data_size": 65536 00:25:43.487 } 00:25:43.487 ] 00:25:43.487 }' 00:25:43.487 07:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:43.487 07:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.745 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.745 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:43.745 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.745 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.004 [2024-11-20 07:23:08.073506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.004 07:23:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.004 "name": "Existed_Raid", 00:25:44.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.004 "strip_size_kb": 0, 00:25:44.004 "state": "configuring", 00:25:44.004 
"raid_level": "raid1", 00:25:44.004 "superblock": false, 00:25:44.004 "num_base_bdevs": 4, 00:25:44.004 "num_base_bdevs_discovered": 2, 00:25:44.004 "num_base_bdevs_operational": 4, 00:25:44.004 "base_bdevs_list": [ 00:25:44.004 { 00:25:44.004 "name": null, 00:25:44.004 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:44.004 "is_configured": false, 00:25:44.004 "data_offset": 0, 00:25:44.004 "data_size": 65536 00:25:44.004 }, 00:25:44.004 { 00:25:44.004 "name": null, 00:25:44.004 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:44.004 "is_configured": false, 00:25:44.004 "data_offset": 0, 00:25:44.004 "data_size": 65536 00:25:44.004 }, 00:25:44.004 { 00:25:44.004 "name": "BaseBdev3", 00:25:44.004 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:44.004 "is_configured": true, 00:25:44.004 "data_offset": 0, 00:25:44.004 "data_size": 65536 00:25:44.004 }, 00:25:44.004 { 00:25:44.004 "name": "BaseBdev4", 00:25:44.004 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:44.004 "is_configured": true, 00:25:44.004 "data_offset": 0, 00:25:44.004 "data_size": 65536 00:25:44.004 } 00:25:44.004 ] 00:25:44.004 }' 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.004 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.572 [2024-11-20 07:23:08.760575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.572 "name": "Existed_Raid", 00:25:44.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.572 "strip_size_kb": 0, 00:25:44.572 "state": "configuring", 00:25:44.572 "raid_level": "raid1", 00:25:44.572 "superblock": false, 00:25:44.572 "num_base_bdevs": 4, 00:25:44.572 "num_base_bdevs_discovered": 3, 00:25:44.572 "num_base_bdevs_operational": 4, 00:25:44.572 "base_bdevs_list": [ 00:25:44.572 { 00:25:44.572 "name": null, 00:25:44.572 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:44.572 "is_configured": false, 00:25:44.572 "data_offset": 0, 00:25:44.572 "data_size": 65536 00:25:44.572 }, 00:25:44.572 { 00:25:44.572 "name": "BaseBdev2", 00:25:44.572 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:44.572 "is_configured": true, 00:25:44.572 "data_offset": 0, 00:25:44.572 "data_size": 65536 00:25:44.572 }, 00:25:44.572 { 00:25:44.572 "name": "BaseBdev3", 00:25:44.572 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:44.572 "is_configured": true, 00:25:44.572 "data_offset": 0, 00:25:44.572 "data_size": 65536 00:25:44.572 }, 00:25:44.572 { 00:25:44.572 "name": "BaseBdev4", 00:25:44.572 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:44.572 "is_configured": true, 00:25:44.572 "data_offset": 0, 00:25:44.572 "data_size": 65536 00:25:44.572 } 00:25:44.572 ] 00:25:44.572 }' 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.572 07:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.139 07:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 31de8473-90e1-4696-8762-a5d5d8789299 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.139 [2024-11-20 07:23:09.422490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:45.139 NewBaseBdev 00:25:45.139 [2024-11-20 07:23:09.422819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:45.139 [2024-11-20 07:23:09.422863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, 
blocklen 512 00:25:45.139 [2024-11-20 07:23:09.423235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:45.139 [2024-11-20 07:23:09.423464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:45.139 [2024-11-20 07:23:09.423480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:45.139 [2024-11-20 07:23:09.423821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.139 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.398 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.398 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:45.398 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:45.398 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.398 [ 00:25:45.398 { 00:25:45.398 "name": "NewBaseBdev", 00:25:45.398 "aliases": [ 00:25:45.398 "31de8473-90e1-4696-8762-a5d5d8789299" 00:25:45.398 ], 00:25:45.398 "product_name": "Malloc disk", 00:25:45.398 "block_size": 512, 00:25:45.398 "num_blocks": 65536, 00:25:45.398 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:45.398 "assigned_rate_limits": { 00:25:45.398 "rw_ios_per_sec": 0, 00:25:45.398 "rw_mbytes_per_sec": 0, 00:25:45.398 "r_mbytes_per_sec": 0, 00:25:45.398 "w_mbytes_per_sec": 0 00:25:45.398 }, 00:25:45.398 "claimed": true, 00:25:45.398 "claim_type": "exclusive_write", 00:25:45.398 "zoned": false, 00:25:45.398 "supported_io_types": { 00:25:45.398 "read": true, 00:25:45.398 "write": true, 00:25:45.398 "unmap": true, 00:25:45.399 "flush": true, 00:25:45.399 "reset": true, 00:25:45.399 "nvme_admin": false, 00:25:45.399 "nvme_io": false, 00:25:45.399 "nvme_io_md": false, 00:25:45.399 "write_zeroes": true, 00:25:45.399 "zcopy": true, 00:25:45.399 "get_zone_info": false, 00:25:45.399 "zone_management": false, 00:25:45.399 "zone_append": false, 00:25:45.399 "compare": false, 00:25:45.399 "compare_and_write": false, 00:25:45.399 "abort": true, 00:25:45.399 "seek_hole": false, 00:25:45.399 "seek_data": false, 00:25:45.399 "copy": true, 00:25:45.399 "nvme_iov_md": false 00:25:45.399 }, 00:25:45.399 "memory_domains": [ 00:25:45.399 { 00:25:45.399 "dma_device_id": "system", 00:25:45.399 "dma_device_type": 1 00:25:45.399 }, 00:25:45.399 { 00:25:45.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.399 "dma_device_type": 2 00:25:45.399 } 00:25:45.399 ], 00:25:45.399 "driver_specific": {} 00:25:45.399 } 00:25:45.399 ] 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.399 "name": "Existed_Raid", 00:25:45.399 "uuid": "3cabe8de-68e1-43e7-840f-fe3b2db5e04a", 00:25:45.399 "strip_size_kb": 0, 00:25:45.399 "state": "online", 00:25:45.399 
"raid_level": "raid1", 00:25:45.399 "superblock": false, 00:25:45.399 "num_base_bdevs": 4, 00:25:45.399 "num_base_bdevs_discovered": 4, 00:25:45.399 "num_base_bdevs_operational": 4, 00:25:45.399 "base_bdevs_list": [ 00:25:45.399 { 00:25:45.399 "name": "NewBaseBdev", 00:25:45.399 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:45.399 "is_configured": true, 00:25:45.399 "data_offset": 0, 00:25:45.399 "data_size": 65536 00:25:45.399 }, 00:25:45.399 { 00:25:45.399 "name": "BaseBdev2", 00:25:45.399 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:45.399 "is_configured": true, 00:25:45.399 "data_offset": 0, 00:25:45.399 "data_size": 65536 00:25:45.399 }, 00:25:45.399 { 00:25:45.399 "name": "BaseBdev3", 00:25:45.399 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:45.399 "is_configured": true, 00:25:45.399 "data_offset": 0, 00:25:45.399 "data_size": 65536 00:25:45.399 }, 00:25:45.399 { 00:25:45.399 "name": "BaseBdev4", 00:25:45.399 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:45.399 "is_configured": true, 00:25:45.399 "data_offset": 0, 00:25:45.399 "data_size": 65536 00:25:45.399 } 00:25:45.399 ] 00:25:45.399 }' 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.399 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.966 [2024-11-20 07:23:09.971392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.966 07:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.966 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.966 "name": "Existed_Raid", 00:25:45.966 "aliases": [ 00:25:45.966 "3cabe8de-68e1-43e7-840f-fe3b2db5e04a" 00:25:45.966 ], 00:25:45.966 "product_name": "Raid Volume", 00:25:45.966 "block_size": 512, 00:25:45.966 "num_blocks": 65536, 00:25:45.966 "uuid": "3cabe8de-68e1-43e7-840f-fe3b2db5e04a", 00:25:45.966 "assigned_rate_limits": { 00:25:45.966 "rw_ios_per_sec": 0, 00:25:45.966 "rw_mbytes_per_sec": 0, 00:25:45.966 "r_mbytes_per_sec": 0, 00:25:45.967 "w_mbytes_per_sec": 0 00:25:45.967 }, 00:25:45.967 "claimed": false, 00:25:45.967 "zoned": false, 00:25:45.967 "supported_io_types": { 00:25:45.967 "read": true, 00:25:45.967 "write": true, 00:25:45.967 "unmap": false, 00:25:45.967 "flush": false, 00:25:45.967 "reset": true, 00:25:45.967 "nvme_admin": false, 00:25:45.967 "nvme_io": false, 00:25:45.967 "nvme_io_md": false, 00:25:45.967 "write_zeroes": true, 00:25:45.967 "zcopy": false, 00:25:45.967 "get_zone_info": false, 00:25:45.967 "zone_management": false, 00:25:45.967 "zone_append": false, 00:25:45.967 "compare": false, 00:25:45.967 "compare_and_write": false, 00:25:45.967 "abort": false, 00:25:45.967 "seek_hole": false, 00:25:45.967 "seek_data": false, 00:25:45.967 
"copy": false, 00:25:45.967 "nvme_iov_md": false 00:25:45.967 }, 00:25:45.967 "memory_domains": [ 00:25:45.967 { 00:25:45.967 "dma_device_id": "system", 00:25:45.967 "dma_device_type": 1 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.967 "dma_device_type": 2 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "system", 00:25:45.967 "dma_device_type": 1 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.967 "dma_device_type": 2 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "system", 00:25:45.967 "dma_device_type": 1 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.967 "dma_device_type": 2 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "system", 00:25:45.967 "dma_device_type": 1 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.967 "dma_device_type": 2 00:25:45.967 } 00:25:45.967 ], 00:25:45.967 "driver_specific": { 00:25:45.967 "raid": { 00:25:45.967 "uuid": "3cabe8de-68e1-43e7-840f-fe3b2db5e04a", 00:25:45.967 "strip_size_kb": 0, 00:25:45.967 "state": "online", 00:25:45.967 "raid_level": "raid1", 00:25:45.967 "superblock": false, 00:25:45.967 "num_base_bdevs": 4, 00:25:45.967 "num_base_bdevs_discovered": 4, 00:25:45.967 "num_base_bdevs_operational": 4, 00:25:45.967 "base_bdevs_list": [ 00:25:45.967 { 00:25:45.967 "name": "NewBaseBdev", 00:25:45.967 "uuid": "31de8473-90e1-4696-8762-a5d5d8789299", 00:25:45.967 "is_configured": true, 00:25:45.967 "data_offset": 0, 00:25:45.967 "data_size": 65536 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "name": "BaseBdev2", 00:25:45.967 "uuid": "f5951f29-b40c-4980-a3f0-caaa119b0c55", 00:25:45.967 "is_configured": true, 00:25:45.967 "data_offset": 0, 00:25:45.967 "data_size": 65536 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "name": "BaseBdev3", 00:25:45.967 "uuid": "bdfda9f1-5af0-4a3d-b94d-031f97dcfd0f", 00:25:45.967 
"is_configured": true, 00:25:45.967 "data_offset": 0, 00:25:45.967 "data_size": 65536 00:25:45.967 }, 00:25:45.967 { 00:25:45.967 "name": "BaseBdev4", 00:25:45.967 "uuid": "8b190410-d772-4c50-bd90-5c3075d5fbc1", 00:25:45.967 "is_configured": true, 00:25:45.967 "data_offset": 0, 00:25:45.967 "data_size": 65536 00:25:45.967 } 00:25:45.967 ] 00:25:45.967 } 00:25:45.967 } 00:25:45.967 }' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:45.967 BaseBdev2 00:25:45.967 BaseBdev3 00:25:45.967 BaseBdev4' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:45.967 07:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.967 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:46.226 07:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.226 [2024-11-20 07:23:10.339053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:46.226 [2024-11-20 07:23:10.339397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:46.226 [2024-11-20 07:23:10.339664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.226 [2024-11-20 07:23:10.340130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.226 [2024-11-20 07:23:10.340161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73525 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73525 ']' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73525 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73525 00:25:46.226 killing process with pid 73525 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73525' 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73525 00:25:46.226 [2024-11-20 07:23:10.376951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:46.226 07:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73525 00:25:46.793 [2024-11-20 07:23:10.774764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:47.756 ************************************ 00:25:47.756 END TEST raid_state_function_test 00:25:47.756 ************************************ 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:47.756 00:25:47.756 real 0m13.060s 00:25:47.756 user 0m21.591s 00:25:47.756 sys 0m1.808s 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:25:47.756 07:23:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:47.756 07:23:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:47.756 07:23:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.756 07:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:47.756 ************************************ 00:25:47.756 START TEST raid_state_function_test_sb 00:25:47.756 ************************************ 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:47.756 
07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:47.756 Process raid pid: 74213 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74213 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74213' 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74213 00:25:47.756 07:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:47.757 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74213 ']' 00:25:47.757 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.757 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.757 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.757 07:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.757 07:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.016 [2024-11-20 07:23:12.107773] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:25:48.016 [2024-11-20 07:23:12.107982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.016 [2024-11-20 07:23:12.296047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.274 [2024-11-20 07:23:12.452488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.533 [2024-11-20 07:23:12.687530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:48.533 [2024-11-20 07:23:12.687633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.100 [2024-11-20 07:23:13.111304] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:49.100 [2024-11-20 07:23:13.111738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:49.100 [2024-11-20 07:23:13.111890] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:49.100 [2024-11-20 07:23:13.111966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:49.100 [2024-11-20 07:23:13.112091] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:25:49.100 [2024-11-20 07:23:13.112164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:49.100 [2024-11-20 07:23:13.112215] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:49.100 [2024-11-20 07:23:13.112352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.100 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.100 07:23:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.101 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.101 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.101 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.101 "name": "Existed_Raid", 00:25:49.101 "uuid": "9fcf21bf-a2bf-42b6-8c2f-cfe2bae26916", 00:25:49.101 "strip_size_kb": 0, 00:25:49.101 "state": "configuring", 00:25:49.101 "raid_level": "raid1", 00:25:49.101 "superblock": true, 00:25:49.101 "num_base_bdevs": 4, 00:25:49.101 "num_base_bdevs_discovered": 0, 00:25:49.101 "num_base_bdevs_operational": 4, 00:25:49.101 "base_bdevs_list": [ 00:25:49.101 { 00:25:49.101 "name": "BaseBdev1", 00:25:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.101 "is_configured": false, 00:25:49.101 "data_offset": 0, 00:25:49.101 "data_size": 0 00:25:49.101 }, 00:25:49.101 { 00:25:49.101 "name": "BaseBdev2", 00:25:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.101 "is_configured": false, 00:25:49.101 "data_offset": 0, 00:25:49.101 "data_size": 0 00:25:49.101 }, 00:25:49.101 { 00:25:49.101 "name": "BaseBdev3", 00:25:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.101 "is_configured": false, 00:25:49.101 "data_offset": 0, 00:25:49.101 "data_size": 0 00:25:49.101 }, 00:25:49.101 { 00:25:49.101 "name": "BaseBdev4", 00:25:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.101 "is_configured": false, 00:25:49.101 "data_offset": 0, 00:25:49.101 "data_size": 0 00:25:49.101 } 00:25:49.101 ] 00:25:49.101 }' 00:25:49.101 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.101 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 [2024-11-20 07:23:13.663427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:49.668 [2024-11-20 07:23:13.663545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 [2024-11-20 07:23:13.675350] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:49.668 [2024-11-20 07:23:13.676071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:49.668 [2024-11-20 07:23:13.676109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:49.668 [2024-11-20 07:23:13.676215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:49.668 [2024-11-20 07:23:13.676235] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:49.668 [2024-11-20 07:23:13.676328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:49.668 [2024-11-20 07:23:13.676347] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:25:49.668 [2024-11-20 07:23:13.676437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 [2024-11-20 07:23:13.727414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:49.668 BaseBdev1 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.668 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.668 [ 00:25:49.668 { 00:25:49.668 "name": "BaseBdev1", 00:25:49.668 "aliases": [ 00:25:49.668 "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9" 00:25:49.668 ], 00:25:49.668 "product_name": "Malloc disk", 00:25:49.668 "block_size": 512, 00:25:49.668 "num_blocks": 65536, 00:25:49.668 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:49.669 "assigned_rate_limits": { 00:25:49.669 "rw_ios_per_sec": 0, 00:25:49.669 "rw_mbytes_per_sec": 0, 00:25:49.669 "r_mbytes_per_sec": 0, 00:25:49.669 "w_mbytes_per_sec": 0 00:25:49.669 }, 00:25:49.669 "claimed": true, 00:25:49.669 "claim_type": "exclusive_write", 00:25:49.669 "zoned": false, 00:25:49.669 "supported_io_types": { 00:25:49.669 "read": true, 00:25:49.669 "write": true, 00:25:49.669 "unmap": true, 00:25:49.669 "flush": true, 00:25:49.669 "reset": true, 00:25:49.669 "nvme_admin": false, 00:25:49.669 "nvme_io": false, 00:25:49.669 "nvme_io_md": false, 00:25:49.669 "write_zeroes": true, 00:25:49.669 "zcopy": true, 00:25:49.669 "get_zone_info": false, 00:25:49.669 "zone_management": false, 00:25:49.669 "zone_append": false, 00:25:49.669 "compare": false, 00:25:49.669 "compare_and_write": false, 00:25:49.669 "abort": true, 00:25:49.669 "seek_hole": false, 00:25:49.669 "seek_data": false, 00:25:49.669 "copy": true, 00:25:49.669 "nvme_iov_md": false 00:25:49.669 }, 00:25:49.669 "memory_domains": [ 00:25:49.669 { 00:25:49.669 "dma_device_id": "system", 00:25:49.669 "dma_device_type": 1 00:25:49.669 }, 00:25:49.669 { 00:25:49.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.669 "dma_device_type": 2 00:25:49.669 } 00:25:49.669 ], 00:25:49.669 "driver_specific": {} 
00:25:49.669 } 00:25:49.669 ] 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.669 "name": "Existed_Raid", 00:25:49.669 "uuid": "6d60e1ce-81c4-4ab8-84b8-00564945d36c", 00:25:49.669 "strip_size_kb": 0, 00:25:49.669 "state": "configuring", 00:25:49.669 "raid_level": "raid1", 00:25:49.669 "superblock": true, 00:25:49.669 "num_base_bdevs": 4, 00:25:49.669 "num_base_bdevs_discovered": 1, 00:25:49.669 "num_base_bdevs_operational": 4, 00:25:49.669 "base_bdevs_list": [ 00:25:49.669 { 00:25:49.669 "name": "BaseBdev1", 00:25:49.669 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:49.669 "is_configured": true, 00:25:49.669 "data_offset": 2048, 00:25:49.669 "data_size": 63488 00:25:49.669 }, 00:25:49.669 { 00:25:49.669 "name": "BaseBdev2", 00:25:49.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.669 "is_configured": false, 00:25:49.669 "data_offset": 0, 00:25:49.669 "data_size": 0 00:25:49.669 }, 00:25:49.669 { 00:25:49.669 "name": "BaseBdev3", 00:25:49.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.669 "is_configured": false, 00:25:49.669 "data_offset": 0, 00:25:49.669 "data_size": 0 00:25:49.669 }, 00:25:49.669 { 00:25:49.669 "name": "BaseBdev4", 00:25:49.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.669 "is_configured": false, 00:25:49.669 "data_offset": 0, 00:25:49.669 "data_size": 0 00:25:49.669 } 00:25:49.669 ] 00:25:49.669 }' 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.669 07:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.237 [2024-11-20 07:23:14.283869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:50.237 [2024-11-20 07:23:14.283994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.237 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.238 [2024-11-20 07:23:14.291857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:50.238 [2024-11-20 07:23:14.294941] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:50.238 [2024-11-20 07:23:14.295743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:50.238 [2024-11-20 07:23:14.295903] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:50.238 [2024-11-20 07:23:14.295985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:50.238 [2024-11-20 07:23:14.296261] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:50.238 [2024-11-20 07:23:14.296441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:50.238 07:23:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.238 "name": 
"Existed_Raid", 00:25:50.238 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:50.238 "strip_size_kb": 0, 00:25:50.238 "state": "configuring", 00:25:50.238 "raid_level": "raid1", 00:25:50.238 "superblock": true, 00:25:50.238 "num_base_bdevs": 4, 00:25:50.238 "num_base_bdevs_discovered": 1, 00:25:50.238 "num_base_bdevs_operational": 4, 00:25:50.238 "base_bdevs_list": [ 00:25:50.238 { 00:25:50.238 "name": "BaseBdev1", 00:25:50.238 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:50.238 "is_configured": true, 00:25:50.238 "data_offset": 2048, 00:25:50.238 "data_size": 63488 00:25:50.238 }, 00:25:50.238 { 00:25:50.238 "name": "BaseBdev2", 00:25:50.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.238 "is_configured": false, 00:25:50.238 "data_offset": 0, 00:25:50.238 "data_size": 0 00:25:50.238 }, 00:25:50.238 { 00:25:50.238 "name": "BaseBdev3", 00:25:50.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.238 "is_configured": false, 00:25:50.238 "data_offset": 0, 00:25:50.238 "data_size": 0 00:25:50.238 }, 00:25:50.238 { 00:25:50.238 "name": "BaseBdev4", 00:25:50.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.238 "is_configured": false, 00:25:50.238 "data_offset": 0, 00:25:50.238 "data_size": 0 00:25:50.238 } 00:25:50.238 ] 00:25:50.238 }' 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.238 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.805 BaseBdev2 00:25:50.805 [2024-11-20 07:23:14.884720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.805 [ 00:25:50.805 { 00:25:50.805 "name": "BaseBdev2", 00:25:50.805 "aliases": [ 00:25:50.805 "642fe095-919f-44bf-b1f6-f2ea021c78e5" 00:25:50.805 ], 00:25:50.805 "product_name": "Malloc disk", 00:25:50.805 "block_size": 512, 00:25:50.805 "num_blocks": 65536, 00:25:50.805 "uuid": "642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:50.805 "assigned_rate_limits": { 
00:25:50.805 "rw_ios_per_sec": 0, 00:25:50.805 "rw_mbytes_per_sec": 0, 00:25:50.805 "r_mbytes_per_sec": 0, 00:25:50.805 "w_mbytes_per_sec": 0 00:25:50.805 }, 00:25:50.805 "claimed": true, 00:25:50.805 "claim_type": "exclusive_write", 00:25:50.805 "zoned": false, 00:25:50.805 "supported_io_types": { 00:25:50.805 "read": true, 00:25:50.805 "write": true, 00:25:50.805 "unmap": true, 00:25:50.805 "flush": true, 00:25:50.805 "reset": true, 00:25:50.805 "nvme_admin": false, 00:25:50.805 "nvme_io": false, 00:25:50.805 "nvme_io_md": false, 00:25:50.805 "write_zeroes": true, 00:25:50.805 "zcopy": true, 00:25:50.805 "get_zone_info": false, 00:25:50.805 "zone_management": false, 00:25:50.805 "zone_append": false, 00:25:50.805 "compare": false, 00:25:50.805 "compare_and_write": false, 00:25:50.805 "abort": true, 00:25:50.805 "seek_hole": false, 00:25:50.805 "seek_data": false, 00:25:50.805 "copy": true, 00:25:50.805 "nvme_iov_md": false 00:25:50.805 }, 00:25:50.805 "memory_domains": [ 00:25:50.805 { 00:25:50.805 "dma_device_id": "system", 00:25:50.805 "dma_device_type": 1 00:25:50.805 }, 00:25:50.805 { 00:25:50.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.805 "dma_device_type": 2 00:25:50.805 } 00:25:50.805 ], 00:25:50.805 "driver_specific": {} 00:25:50.805 } 00:25:50.805 ] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.805 "name": "Existed_Raid", 00:25:50.805 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:50.805 "strip_size_kb": 0, 00:25:50.805 "state": "configuring", 00:25:50.805 "raid_level": "raid1", 00:25:50.805 "superblock": true, 00:25:50.805 "num_base_bdevs": 4, 00:25:50.805 "num_base_bdevs_discovered": 2, 00:25:50.805 "num_base_bdevs_operational": 4, 00:25:50.805 
"base_bdevs_list": [ 00:25:50.805 { 00:25:50.805 "name": "BaseBdev1", 00:25:50.805 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:50.805 "is_configured": true, 00:25:50.805 "data_offset": 2048, 00:25:50.805 "data_size": 63488 00:25:50.805 }, 00:25:50.805 { 00:25:50.805 "name": "BaseBdev2", 00:25:50.805 "uuid": "642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:50.805 "is_configured": true, 00:25:50.805 "data_offset": 2048, 00:25:50.805 "data_size": 63488 00:25:50.805 }, 00:25:50.805 { 00:25:50.805 "name": "BaseBdev3", 00:25:50.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.805 "is_configured": false, 00:25:50.805 "data_offset": 0, 00:25:50.805 "data_size": 0 00:25:50.805 }, 00:25:50.805 { 00:25:50.805 "name": "BaseBdev4", 00:25:50.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.805 "is_configured": false, 00:25:50.805 "data_offset": 0, 00:25:50.805 "data_size": 0 00:25:50.805 } 00:25:50.805 ] 00:25:50.805 }' 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.805 07:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.376 [2024-11-20 07:23:15.508551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:51.376 BaseBdev3 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.376 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.376 [ 00:25:51.376 { 00:25:51.376 "name": "BaseBdev3", 00:25:51.376 "aliases": [ 00:25:51.376 "c90f5fcf-eee9-428c-a789-140916a2ee33" 00:25:51.376 ], 00:25:51.376 "product_name": "Malloc disk", 00:25:51.376 "block_size": 512, 00:25:51.376 "num_blocks": 65536, 00:25:51.376 "uuid": "c90f5fcf-eee9-428c-a789-140916a2ee33", 00:25:51.376 "assigned_rate_limits": { 00:25:51.376 "rw_ios_per_sec": 0, 00:25:51.376 "rw_mbytes_per_sec": 0, 00:25:51.376 "r_mbytes_per_sec": 0, 00:25:51.376 "w_mbytes_per_sec": 0 00:25:51.376 }, 00:25:51.376 "claimed": true, 00:25:51.376 "claim_type": "exclusive_write", 00:25:51.376 "zoned": false, 00:25:51.376 "supported_io_types": { 00:25:51.376 "read": true, 00:25:51.376 
"write": true, 00:25:51.376 "unmap": true, 00:25:51.376 "flush": true, 00:25:51.376 "reset": true, 00:25:51.376 "nvme_admin": false, 00:25:51.376 "nvme_io": false, 00:25:51.377 "nvme_io_md": false, 00:25:51.377 "write_zeroes": true, 00:25:51.377 "zcopy": true, 00:25:51.377 "get_zone_info": false, 00:25:51.377 "zone_management": false, 00:25:51.377 "zone_append": false, 00:25:51.377 "compare": false, 00:25:51.377 "compare_and_write": false, 00:25:51.377 "abort": true, 00:25:51.377 "seek_hole": false, 00:25:51.377 "seek_data": false, 00:25:51.377 "copy": true, 00:25:51.377 "nvme_iov_md": false 00:25:51.377 }, 00:25:51.377 "memory_domains": [ 00:25:51.377 { 00:25:51.377 "dma_device_id": "system", 00:25:51.377 "dma_device_type": 1 00:25:51.377 }, 00:25:51.377 { 00:25:51.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.377 "dma_device_type": 2 00:25:51.377 } 00:25:51.377 ], 00:25:51.377 "driver_specific": {} 00:25:51.377 } 00:25:51.377 ] 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.377 "name": "Existed_Raid", 00:25:51.377 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:51.377 "strip_size_kb": 0, 00:25:51.377 "state": "configuring", 00:25:51.377 "raid_level": "raid1", 00:25:51.377 "superblock": true, 00:25:51.377 "num_base_bdevs": 4, 00:25:51.377 "num_base_bdevs_discovered": 3, 00:25:51.377 "num_base_bdevs_operational": 4, 00:25:51.377 "base_bdevs_list": [ 00:25:51.377 { 00:25:51.377 "name": "BaseBdev1", 00:25:51.377 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:51.377 "is_configured": true, 00:25:51.377 "data_offset": 2048, 00:25:51.377 "data_size": 63488 00:25:51.377 }, 00:25:51.377 { 00:25:51.377 "name": "BaseBdev2", 00:25:51.377 "uuid": 
"642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:51.377 "is_configured": true, 00:25:51.377 "data_offset": 2048, 00:25:51.377 "data_size": 63488 00:25:51.377 }, 00:25:51.377 { 00:25:51.377 "name": "BaseBdev3", 00:25:51.377 "uuid": "c90f5fcf-eee9-428c-a789-140916a2ee33", 00:25:51.377 "is_configured": true, 00:25:51.377 "data_offset": 2048, 00:25:51.377 "data_size": 63488 00:25:51.377 }, 00:25:51.377 { 00:25:51.377 "name": "BaseBdev4", 00:25:51.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.377 "is_configured": false, 00:25:51.377 "data_offset": 0, 00:25:51.377 "data_size": 0 00:25:51.377 } 00:25:51.377 ] 00:25:51.377 }' 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.377 07:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 [2024-11-20 07:23:16.108358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:51.945 [2024-11-20 07:23:16.108832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:51.945 [2024-11-20 07:23:16.108856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:51.945 BaseBdev4 00:25:51.945 [2024-11-20 07:23:16.109245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:51.945 [2024-11-20 07:23:16.109503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:51.945 [2024-11-20 07:23:16.109531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:25:51.945 [2024-11-20 07:23:16.109786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 [ 00:25:51.945 { 00:25:51.945 "name": "BaseBdev4", 00:25:51.945 "aliases": [ 00:25:51.945 "06147fa8-028e-47ed-9922-389223ba6f85" 00:25:51.945 ], 00:25:51.945 "product_name": "Malloc disk", 00:25:51.945 "block_size": 512, 00:25:51.945 
"num_blocks": 65536, 00:25:51.945 "uuid": "06147fa8-028e-47ed-9922-389223ba6f85", 00:25:51.945 "assigned_rate_limits": { 00:25:51.945 "rw_ios_per_sec": 0, 00:25:51.945 "rw_mbytes_per_sec": 0, 00:25:51.945 "r_mbytes_per_sec": 0, 00:25:51.945 "w_mbytes_per_sec": 0 00:25:51.945 }, 00:25:51.945 "claimed": true, 00:25:51.945 "claim_type": "exclusive_write", 00:25:51.945 "zoned": false, 00:25:51.945 "supported_io_types": { 00:25:51.945 "read": true, 00:25:51.945 "write": true, 00:25:51.945 "unmap": true, 00:25:51.945 "flush": true, 00:25:51.945 "reset": true, 00:25:51.945 "nvme_admin": false, 00:25:51.945 "nvme_io": false, 00:25:51.945 "nvme_io_md": false, 00:25:51.945 "write_zeroes": true, 00:25:51.945 "zcopy": true, 00:25:51.945 "get_zone_info": false, 00:25:51.945 "zone_management": false, 00:25:51.945 "zone_append": false, 00:25:51.945 "compare": false, 00:25:51.945 "compare_and_write": false, 00:25:51.945 "abort": true, 00:25:51.945 "seek_hole": false, 00:25:51.945 "seek_data": false, 00:25:51.945 "copy": true, 00:25:51.945 "nvme_iov_md": false 00:25:51.945 }, 00:25:51.945 "memory_domains": [ 00:25:51.945 { 00:25:51.945 "dma_device_id": "system", 00:25:51.945 "dma_device_type": 1 00:25:51.945 }, 00:25:51.945 { 00:25:51.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.945 "dma_device_type": 2 00:25:51.945 } 00:25:51.945 ], 00:25:51.945 "driver_specific": {} 00:25:51.945 } 00:25:51.945 ] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.945 "name": "Existed_Raid", 00:25:51.945 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:51.945 "strip_size_kb": 0, 00:25:51.945 "state": "online", 00:25:51.945 "raid_level": "raid1", 00:25:51.945 "superblock": true, 00:25:51.945 "num_base_bdevs": 4, 
00:25:51.945 "num_base_bdevs_discovered": 4, 00:25:51.945 "num_base_bdevs_operational": 4, 00:25:51.945 "base_bdevs_list": [ 00:25:51.945 { 00:25:51.945 "name": "BaseBdev1", 00:25:51.945 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:51.945 "is_configured": true, 00:25:51.945 "data_offset": 2048, 00:25:51.945 "data_size": 63488 00:25:51.945 }, 00:25:51.945 { 00:25:51.945 "name": "BaseBdev2", 00:25:51.945 "uuid": "642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:51.945 "is_configured": true, 00:25:51.945 "data_offset": 2048, 00:25:51.945 "data_size": 63488 00:25:51.945 }, 00:25:51.945 { 00:25:51.945 "name": "BaseBdev3", 00:25:51.945 "uuid": "c90f5fcf-eee9-428c-a789-140916a2ee33", 00:25:51.945 "is_configured": true, 00:25:51.945 "data_offset": 2048, 00:25:51.945 "data_size": 63488 00:25:51.945 }, 00:25:51.945 { 00:25:51.945 "name": "BaseBdev4", 00:25:51.945 "uuid": "06147fa8-028e-47ed-9922-389223ba6f85", 00:25:51.945 "is_configured": true, 00:25:51.945 "data_offset": 2048, 00:25:51.945 "data_size": 63488 00:25:51.945 } 00:25:51.945 ] 00:25:51.945 }' 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.945 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:52.514 
07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.514 [2024-11-20 07:23:16.685136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.514 "name": "Existed_Raid", 00:25:52.514 "aliases": [ 00:25:52.514 "d4550adb-4b04-479c-a9f3-99c290633abe" 00:25:52.514 ], 00:25:52.514 "product_name": "Raid Volume", 00:25:52.514 "block_size": 512, 00:25:52.514 "num_blocks": 63488, 00:25:52.514 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:52.514 "assigned_rate_limits": { 00:25:52.514 "rw_ios_per_sec": 0, 00:25:52.514 "rw_mbytes_per_sec": 0, 00:25:52.514 "r_mbytes_per_sec": 0, 00:25:52.514 "w_mbytes_per_sec": 0 00:25:52.514 }, 00:25:52.514 "claimed": false, 00:25:52.514 "zoned": false, 00:25:52.514 "supported_io_types": { 00:25:52.514 "read": true, 00:25:52.514 "write": true, 00:25:52.514 "unmap": false, 00:25:52.514 "flush": false, 00:25:52.514 "reset": true, 00:25:52.514 "nvme_admin": false, 00:25:52.514 "nvme_io": false, 00:25:52.514 "nvme_io_md": false, 00:25:52.514 "write_zeroes": true, 00:25:52.514 "zcopy": false, 00:25:52.514 "get_zone_info": false, 00:25:52.514 "zone_management": false, 00:25:52.514 "zone_append": false, 00:25:52.514 "compare": false, 00:25:52.514 "compare_and_write": false, 00:25:52.514 "abort": false, 00:25:52.514 "seek_hole": false, 00:25:52.514 "seek_data": false, 00:25:52.514 "copy": false, 00:25:52.514 
"nvme_iov_md": false 00:25:52.514 }, 00:25:52.514 "memory_domains": [ 00:25:52.514 { 00:25:52.514 "dma_device_id": "system", 00:25:52.514 "dma_device_type": 1 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.514 "dma_device_type": 2 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "system", 00:25:52.514 "dma_device_type": 1 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.514 "dma_device_type": 2 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "system", 00:25:52.514 "dma_device_type": 1 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.514 "dma_device_type": 2 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "system", 00:25:52.514 "dma_device_type": 1 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.514 "dma_device_type": 2 00:25:52.514 } 00:25:52.514 ], 00:25:52.514 "driver_specific": { 00:25:52.514 "raid": { 00:25:52.514 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:52.514 "strip_size_kb": 0, 00:25:52.514 "state": "online", 00:25:52.514 "raid_level": "raid1", 00:25:52.514 "superblock": true, 00:25:52.514 "num_base_bdevs": 4, 00:25:52.514 "num_base_bdevs_discovered": 4, 00:25:52.514 "num_base_bdevs_operational": 4, 00:25:52.514 "base_bdevs_list": [ 00:25:52.514 { 00:25:52.514 "name": "BaseBdev1", 00:25:52.514 "uuid": "9282e7a1-0e0d-45ac-bba6-e3c1eac531c9", 00:25:52.514 "is_configured": true, 00:25:52.514 "data_offset": 2048, 00:25:52.514 "data_size": 63488 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "name": "BaseBdev2", 00:25:52.514 "uuid": "642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:52.514 "is_configured": true, 00:25:52.514 "data_offset": 2048, 00:25:52.514 "data_size": 63488 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "name": "BaseBdev3", 00:25:52.514 "uuid": "c90f5fcf-eee9-428c-a789-140916a2ee33", 00:25:52.514 "is_configured": true, 
00:25:52.514 "data_offset": 2048, 00:25:52.514 "data_size": 63488 00:25:52.514 }, 00:25:52.514 { 00:25:52.514 "name": "BaseBdev4", 00:25:52.514 "uuid": "06147fa8-028e-47ed-9922-389223ba6f85", 00:25:52.514 "is_configured": true, 00:25:52.514 "data_offset": 2048, 00:25:52.514 "data_size": 63488 00:25:52.514 } 00:25:52.514 ] 00:25:52.514 } 00:25:52.514 } 00:25:52.514 }' 00:25:52.514 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:52.515 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:52.515 BaseBdev2 00:25:52.515 BaseBdev3 00:25:52.515 BaseBdev4' 00:25:52.515 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:52.774 07:23:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 07:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.774 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 [2024-11-20 07:23:17.036809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:53.033 07:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.033 "name": "Existed_Raid", 00:25:53.033 "uuid": "d4550adb-4b04-479c-a9f3-99c290633abe", 00:25:53.033 "strip_size_kb": 0, 00:25:53.033 
"state": "online", 00:25:53.033 "raid_level": "raid1", 00:25:53.033 "superblock": true, 00:25:53.033 "num_base_bdevs": 4, 00:25:53.033 "num_base_bdevs_discovered": 3, 00:25:53.033 "num_base_bdevs_operational": 3, 00:25:53.033 "base_bdevs_list": [ 00:25:53.033 { 00:25:53.033 "name": null, 00:25:53.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.033 "is_configured": false, 00:25:53.033 "data_offset": 0, 00:25:53.033 "data_size": 63488 00:25:53.033 }, 00:25:53.033 { 00:25:53.033 "name": "BaseBdev2", 00:25:53.033 "uuid": "642fe095-919f-44bf-b1f6-f2ea021c78e5", 00:25:53.033 "is_configured": true, 00:25:53.033 "data_offset": 2048, 00:25:53.033 "data_size": 63488 00:25:53.033 }, 00:25:53.033 { 00:25:53.033 "name": "BaseBdev3", 00:25:53.033 "uuid": "c90f5fcf-eee9-428c-a789-140916a2ee33", 00:25:53.033 "is_configured": true, 00:25:53.033 "data_offset": 2048, 00:25:53.033 "data_size": 63488 00:25:53.033 }, 00:25:53.033 { 00:25:53.033 "name": "BaseBdev4", 00:25:53.033 "uuid": "06147fa8-028e-47ed-9922-389223ba6f85", 00:25:53.033 "is_configured": true, 00:25:53.033 "data_offset": 2048, 00:25:53.033 "data_size": 63488 00:25:53.033 } 00:25:53.033 ] 00:25:53.033 }' 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.033 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.600 07:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.600 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.600 [2024-11-20 07:23:17.701211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.601 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.601 [2024-11-20 07:23:17.848753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.860 07:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.860 [2024-11-20 07:23:17.996164] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:53.860 [2024-11-20 07:23:17.996468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.860 [2024-11-20 07:23:18.084855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.860 [2024-11-20 07:23:18.085178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.860 [2024-11-20 07:23:18.085216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.860 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.119 BaseBdev2 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:25:54.119 [ 00:25:54.119 { 00:25:54.119 "name": "BaseBdev2", 00:25:54.119 "aliases": [ 00:25:54.119 "56cb3d08-2b1b-4b12-947f-253e3a5c5de7" 00:25:54.119 ], 00:25:54.119 "product_name": "Malloc disk", 00:25:54.119 "block_size": 512, 00:25:54.119 "num_blocks": 65536, 00:25:54.119 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:54.119 "assigned_rate_limits": { 00:25:54.119 "rw_ios_per_sec": 0, 00:25:54.119 "rw_mbytes_per_sec": 0, 00:25:54.119 "r_mbytes_per_sec": 0, 00:25:54.119 "w_mbytes_per_sec": 0 00:25:54.119 }, 00:25:54.119 "claimed": false, 00:25:54.119 "zoned": false, 00:25:54.119 "supported_io_types": { 00:25:54.119 "read": true, 00:25:54.119 "write": true, 00:25:54.119 "unmap": true, 00:25:54.119 "flush": true, 00:25:54.119 "reset": true, 00:25:54.119 "nvme_admin": false, 00:25:54.119 "nvme_io": false, 00:25:54.119 "nvme_io_md": false, 00:25:54.119 "write_zeroes": true, 00:25:54.119 "zcopy": true, 00:25:54.119 "get_zone_info": false, 00:25:54.119 "zone_management": false, 00:25:54.119 "zone_append": false, 00:25:54.119 "compare": false, 00:25:54.119 "compare_and_write": false, 00:25:54.119 "abort": true, 00:25:54.119 "seek_hole": false, 00:25:54.119 "seek_data": false, 00:25:54.119 "copy": true, 00:25:54.119 "nvme_iov_md": false 00:25:54.119 }, 00:25:54.119 "memory_domains": [ 00:25:54.119 { 00:25:54.119 "dma_device_id": "system", 00:25:54.119 "dma_device_type": 1 00:25:54.119 }, 00:25:54.119 { 00:25:54.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.119 "dma_device_type": 2 00:25:54.119 } 00:25:54.119 ], 00:25:54.119 "driver_specific": {} 00:25:54.119 } 00:25:54.119 ] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.119 07:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.119 BaseBdev3 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:54.119 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.119 07:23:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.119 [ 00:25:54.119 { 00:25:54.119 "name": "BaseBdev3", 00:25:54.119 "aliases": [ 00:25:54.119 "f5ac2692-ddff-48f7-8473-b093286ccd83" 00:25:54.119 ], 00:25:54.119 "product_name": "Malloc disk", 00:25:54.119 "block_size": 512, 00:25:54.119 "num_blocks": 65536, 00:25:54.119 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:54.119 "assigned_rate_limits": { 00:25:54.119 "rw_ios_per_sec": 0, 00:25:54.119 "rw_mbytes_per_sec": 0, 00:25:54.119 "r_mbytes_per_sec": 0, 00:25:54.119 "w_mbytes_per_sec": 0 00:25:54.119 }, 00:25:54.119 "claimed": false, 00:25:54.119 "zoned": false, 00:25:54.119 "supported_io_types": { 00:25:54.119 "read": true, 00:25:54.119 "write": true, 00:25:54.119 "unmap": true, 00:25:54.119 "flush": true, 00:25:54.119 "reset": true, 00:25:54.119 "nvme_admin": false, 00:25:54.119 "nvme_io": false, 00:25:54.119 "nvme_io_md": false, 00:25:54.120 "write_zeroes": true, 00:25:54.120 "zcopy": true, 00:25:54.120 "get_zone_info": false, 00:25:54.120 "zone_management": false, 00:25:54.120 "zone_append": false, 00:25:54.120 "compare": false, 00:25:54.120 "compare_and_write": false, 00:25:54.120 "abort": true, 00:25:54.120 "seek_hole": false, 00:25:54.120 "seek_data": false, 00:25:54.120 "copy": true, 00:25:54.120 "nvme_iov_md": false 00:25:54.120 }, 00:25:54.120 "memory_domains": [ 00:25:54.120 { 00:25:54.120 "dma_device_id": "system", 00:25:54.120 "dma_device_type": 1 00:25:54.120 }, 00:25:54.120 { 00:25:54.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.120 "dma_device_type": 2 00:25:54.120 } 00:25:54.120 ], 00:25:54.120 "driver_specific": {} 00:25:54.120 } 00:25:54.120 ] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.120 BaseBdev4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.120 [ 00:25:54.120 { 00:25:54.120 "name": "BaseBdev4", 00:25:54.120 "aliases": [ 00:25:54.120 "9ae1980c-f41f-40db-a016-3f668d0799a5" 00:25:54.120 ], 00:25:54.120 "product_name": "Malloc disk", 00:25:54.120 "block_size": 512, 00:25:54.120 "num_blocks": 65536, 00:25:54.120 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:54.120 "assigned_rate_limits": { 00:25:54.120 "rw_ios_per_sec": 0, 00:25:54.120 "rw_mbytes_per_sec": 0, 00:25:54.120 "r_mbytes_per_sec": 0, 00:25:54.120 "w_mbytes_per_sec": 0 00:25:54.120 }, 00:25:54.120 "claimed": false, 00:25:54.120 "zoned": false, 00:25:54.120 "supported_io_types": { 00:25:54.120 "read": true, 00:25:54.120 "write": true, 00:25:54.120 "unmap": true, 00:25:54.120 "flush": true, 00:25:54.120 "reset": true, 00:25:54.120 "nvme_admin": false, 00:25:54.120 "nvme_io": false, 00:25:54.120 "nvme_io_md": false, 00:25:54.120 "write_zeroes": true, 00:25:54.120 "zcopy": true, 00:25:54.120 "get_zone_info": false, 00:25:54.120 "zone_management": false, 00:25:54.120 "zone_append": false, 00:25:54.120 "compare": false, 00:25:54.120 "compare_and_write": false, 00:25:54.120 "abort": true, 00:25:54.120 "seek_hole": false, 00:25:54.120 "seek_data": false, 00:25:54.120 "copy": true, 00:25:54.120 "nvme_iov_md": false 00:25:54.120 }, 00:25:54.120 "memory_domains": [ 00:25:54.120 { 00:25:54.120 "dma_device_id": "system", 00:25:54.120 "dma_device_type": 1 00:25:54.120 }, 00:25:54.120 { 00:25:54.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.120 "dma_device_type": 2 00:25:54.120 } 00:25:54.120 ], 00:25:54.120 "driver_specific": {} 00:25:54.120 } 00:25:54.120 ] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.120 [2024-11-20 07:23:18.372308] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:54.120 [2024-11-20 07:23:18.373034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:54.120 [2024-11-20 07:23:18.373086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:54.120 [2024-11-20 07:23:18.375705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.120 [2024-11-20 07:23:18.375784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.120 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.379 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.379 "name": "Existed_Raid", 00:25:54.379 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:54.379 "strip_size_kb": 0, 00:25:54.379 "state": "configuring", 00:25:54.379 "raid_level": "raid1", 00:25:54.379 "superblock": true, 00:25:54.379 "num_base_bdevs": 4, 00:25:54.379 "num_base_bdevs_discovered": 3, 00:25:54.379 "num_base_bdevs_operational": 4, 00:25:54.379 "base_bdevs_list": [ 00:25:54.379 { 00:25:54.379 "name": "BaseBdev1", 00:25:54.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.379 "is_configured": false, 00:25:54.379 "data_offset": 0, 00:25:54.379 "data_size": 0 00:25:54.379 }, 00:25:54.379 { 00:25:54.379 "name": "BaseBdev2", 00:25:54.379 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 
00:25:54.379 "is_configured": true, 00:25:54.379 "data_offset": 2048, 00:25:54.379 "data_size": 63488 00:25:54.379 }, 00:25:54.379 { 00:25:54.379 "name": "BaseBdev3", 00:25:54.379 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:54.379 "is_configured": true, 00:25:54.379 "data_offset": 2048, 00:25:54.379 "data_size": 63488 00:25:54.379 }, 00:25:54.379 { 00:25:54.379 "name": "BaseBdev4", 00:25:54.379 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:54.379 "is_configured": true, 00:25:54.379 "data_offset": 2048, 00:25:54.379 "data_size": 63488 00:25:54.379 } 00:25:54.379 ] 00:25:54.379 }' 00:25:54.379 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.379 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.638 [2024-11-20 07:23:18.900445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.638 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.896 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.896 "name": "Existed_Raid", 00:25:54.896 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:54.896 "strip_size_kb": 0, 00:25:54.896 "state": "configuring", 00:25:54.896 "raid_level": "raid1", 00:25:54.896 "superblock": true, 00:25:54.896 "num_base_bdevs": 4, 00:25:54.896 "num_base_bdevs_discovered": 2, 00:25:54.896 "num_base_bdevs_operational": 4, 00:25:54.896 "base_bdevs_list": [ 00:25:54.896 { 00:25:54.896 "name": "BaseBdev1", 00:25:54.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.896 "is_configured": false, 00:25:54.896 "data_offset": 0, 00:25:54.896 "data_size": 0 00:25:54.896 }, 00:25:54.896 { 00:25:54.896 "name": null, 00:25:54.896 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:54.896 
"is_configured": false, 00:25:54.896 "data_offset": 0, 00:25:54.896 "data_size": 63488 00:25:54.896 }, 00:25:54.896 { 00:25:54.896 "name": "BaseBdev3", 00:25:54.896 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:54.896 "is_configured": true, 00:25:54.896 "data_offset": 2048, 00:25:54.896 "data_size": 63488 00:25:54.896 }, 00:25:54.896 { 00:25:54.896 "name": "BaseBdev4", 00:25:54.896 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:54.896 "is_configured": true, 00:25:54.896 "data_offset": 2048, 00:25:54.896 "data_size": 63488 00:25:54.896 } 00:25:54.896 ] 00:25:54.896 }' 00:25:54.896 07:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.896 07:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.153 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:55.153 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.153 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.153 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.410 [2024-11-20 07:23:19.511363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.410 BaseBdev1 
00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.410 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.410 [ 00:25:55.410 { 00:25:55.410 "name": "BaseBdev1", 00:25:55.410 "aliases": [ 00:25:55.410 "93408ed2-61b5-4c69-b36b-e6e1bf395bfd" 00:25:55.410 ], 00:25:55.410 "product_name": "Malloc disk", 00:25:55.410 "block_size": 512, 00:25:55.410 "num_blocks": 65536, 00:25:55.410 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:55.410 "assigned_rate_limits": { 00:25:55.410 
"rw_ios_per_sec": 0, 00:25:55.410 "rw_mbytes_per_sec": 0, 00:25:55.410 "r_mbytes_per_sec": 0, 00:25:55.410 "w_mbytes_per_sec": 0 00:25:55.410 }, 00:25:55.410 "claimed": true, 00:25:55.410 "claim_type": "exclusive_write", 00:25:55.410 "zoned": false, 00:25:55.410 "supported_io_types": { 00:25:55.410 "read": true, 00:25:55.410 "write": true, 00:25:55.410 "unmap": true, 00:25:55.410 "flush": true, 00:25:55.410 "reset": true, 00:25:55.410 "nvme_admin": false, 00:25:55.410 "nvme_io": false, 00:25:55.410 "nvme_io_md": false, 00:25:55.410 "write_zeroes": true, 00:25:55.410 "zcopy": true, 00:25:55.410 "get_zone_info": false, 00:25:55.410 "zone_management": false, 00:25:55.411 "zone_append": false, 00:25:55.411 "compare": false, 00:25:55.411 "compare_and_write": false, 00:25:55.411 "abort": true, 00:25:55.411 "seek_hole": false, 00:25:55.411 "seek_data": false, 00:25:55.411 "copy": true, 00:25:55.411 "nvme_iov_md": false 00:25:55.411 }, 00:25:55.411 "memory_domains": [ 00:25:55.411 { 00:25:55.411 "dma_device_id": "system", 00:25:55.411 "dma_device_type": 1 00:25:55.411 }, 00:25:55.411 { 00:25:55.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.411 "dma_device_type": 2 00:25:55.411 } 00:25:55.411 ], 00:25:55.411 "driver_specific": {} 00:25:55.411 } 00:25:55.411 ] 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.411 "name": "Existed_Raid", 00:25:55.411 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:55.411 "strip_size_kb": 0, 00:25:55.411 "state": "configuring", 00:25:55.411 "raid_level": "raid1", 00:25:55.411 "superblock": true, 00:25:55.411 "num_base_bdevs": 4, 00:25:55.411 "num_base_bdevs_discovered": 3, 00:25:55.411 "num_base_bdevs_operational": 4, 00:25:55.411 "base_bdevs_list": [ 00:25:55.411 { 00:25:55.411 "name": "BaseBdev1", 00:25:55.411 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:55.411 "is_configured": true, 00:25:55.411 "data_offset": 2048, 00:25:55.411 "data_size": 63488 
00:25:55.411 }, 00:25:55.411 { 00:25:55.411 "name": null, 00:25:55.411 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:55.411 "is_configured": false, 00:25:55.411 "data_offset": 0, 00:25:55.411 "data_size": 63488 00:25:55.411 }, 00:25:55.411 { 00:25:55.411 "name": "BaseBdev3", 00:25:55.411 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:55.411 "is_configured": true, 00:25:55.411 "data_offset": 2048, 00:25:55.411 "data_size": 63488 00:25:55.411 }, 00:25:55.411 { 00:25:55.411 "name": "BaseBdev4", 00:25:55.411 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:55.411 "is_configured": true, 00:25:55.411 "data_offset": 2048, 00:25:55.411 "data_size": 63488 00:25:55.411 } 00:25:55.411 ] 00:25:55.411 }' 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.411 07:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.979 
[2024-11-20 07:23:20.087645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.979 07:23:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.979 "name": "Existed_Raid", 00:25:55.979 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:55.979 "strip_size_kb": 0, 00:25:55.979 "state": "configuring", 00:25:55.979 "raid_level": "raid1", 00:25:55.979 "superblock": true, 00:25:55.979 "num_base_bdevs": 4, 00:25:55.979 "num_base_bdevs_discovered": 2, 00:25:55.979 "num_base_bdevs_operational": 4, 00:25:55.979 "base_bdevs_list": [ 00:25:55.979 { 00:25:55.979 "name": "BaseBdev1", 00:25:55.979 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:55.979 "is_configured": true, 00:25:55.979 "data_offset": 2048, 00:25:55.979 "data_size": 63488 00:25:55.979 }, 00:25:55.979 { 00:25:55.979 "name": null, 00:25:55.979 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:55.979 "is_configured": false, 00:25:55.979 "data_offset": 0, 00:25:55.979 "data_size": 63488 00:25:55.979 }, 00:25:55.979 { 00:25:55.979 "name": null, 00:25:55.979 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:55.979 "is_configured": false, 00:25:55.979 "data_offset": 0, 00:25:55.979 "data_size": 63488 00:25:55.979 }, 00:25:55.979 { 00:25:55.979 "name": "BaseBdev4", 00:25:55.979 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:55.979 "is_configured": true, 00:25:55.979 "data_offset": 2048, 00:25:55.979 "data_size": 63488 00:25:55.979 } 00:25:55.979 ] 00:25:55.979 }' 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.979 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.546 
07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.546 [2024-11-20 07:23:20.679803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.546 "name": "Existed_Raid", 00:25:56.546 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:56.546 "strip_size_kb": 0, 00:25:56.546 "state": "configuring", 00:25:56.546 "raid_level": "raid1", 00:25:56.546 "superblock": true, 00:25:56.546 "num_base_bdevs": 4, 00:25:56.546 "num_base_bdevs_discovered": 3, 00:25:56.546 "num_base_bdevs_operational": 4, 00:25:56.546 "base_bdevs_list": [ 00:25:56.546 { 00:25:56.546 "name": "BaseBdev1", 00:25:56.546 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:56.546 "is_configured": true, 00:25:56.546 "data_offset": 2048, 00:25:56.546 "data_size": 63488 00:25:56.546 }, 00:25:56.546 { 00:25:56.546 "name": null, 00:25:56.546 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:56.546 "is_configured": false, 00:25:56.546 "data_offset": 0, 00:25:56.546 "data_size": 63488 00:25:56.546 }, 00:25:56.546 { 00:25:56.546 "name": "BaseBdev3", 00:25:56.546 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:56.546 "is_configured": true, 00:25:56.546 "data_offset": 2048, 00:25:56.546 "data_size": 63488 00:25:56.546 }, 00:25:56.546 { 00:25:56.546 "name": "BaseBdev4", 00:25:56.546 "uuid": 
"9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:56.546 "is_configured": true, 00:25:56.546 "data_offset": 2048, 00:25:56.546 "data_size": 63488 00:25:56.546 } 00:25:56.546 ] 00:25:56.546 }' 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.546 07:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.112 [2024-11-20 07:23:21.244016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.112 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.112 "name": "Existed_Raid", 00:25:57.112 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:57.112 "strip_size_kb": 0, 00:25:57.112 "state": "configuring", 00:25:57.112 "raid_level": "raid1", 00:25:57.112 "superblock": true, 00:25:57.112 "num_base_bdevs": 4, 00:25:57.112 "num_base_bdevs_discovered": 2, 00:25:57.112 "num_base_bdevs_operational": 4, 00:25:57.112 "base_bdevs_list": [ 00:25:57.112 { 00:25:57.112 "name": null, 00:25:57.112 
"uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:57.112 "is_configured": false, 00:25:57.112 "data_offset": 0, 00:25:57.112 "data_size": 63488 00:25:57.113 }, 00:25:57.113 { 00:25:57.113 "name": null, 00:25:57.113 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:57.113 "is_configured": false, 00:25:57.113 "data_offset": 0, 00:25:57.113 "data_size": 63488 00:25:57.113 }, 00:25:57.113 { 00:25:57.113 "name": "BaseBdev3", 00:25:57.113 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:57.113 "is_configured": true, 00:25:57.113 "data_offset": 2048, 00:25:57.113 "data_size": 63488 00:25:57.113 }, 00:25:57.113 { 00:25:57.113 "name": "BaseBdev4", 00:25:57.113 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:57.113 "is_configured": true, 00:25:57.113 "data_offset": 2048, 00:25:57.113 "data_size": 63488 00:25:57.113 } 00:25:57.113 ] 00:25:57.113 }' 00:25:57.113 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.113 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.679 [2024-11-20 07:23:21.936231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.679 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.680 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.680 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.680 07:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.680 07:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.938 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.938 "name": "Existed_Raid", 00:25:57.938 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:57.938 "strip_size_kb": 0, 00:25:57.938 "state": "configuring", 00:25:57.938 "raid_level": "raid1", 00:25:57.938 "superblock": true, 00:25:57.938 "num_base_bdevs": 4, 00:25:57.938 "num_base_bdevs_discovered": 3, 00:25:57.938 "num_base_bdevs_operational": 4, 00:25:57.938 "base_bdevs_list": [ 00:25:57.938 { 00:25:57.938 "name": null, 00:25:57.938 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:57.938 "is_configured": false, 00:25:57.938 "data_offset": 0, 00:25:57.938 "data_size": 63488 00:25:57.938 }, 00:25:57.938 { 00:25:57.938 "name": "BaseBdev2", 00:25:57.938 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:57.938 "is_configured": true, 00:25:57.938 "data_offset": 2048, 00:25:57.938 "data_size": 63488 00:25:57.938 }, 00:25:57.938 { 00:25:57.938 "name": "BaseBdev3", 00:25:57.938 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:57.938 "is_configured": true, 00:25:57.938 "data_offset": 2048, 00:25:57.938 "data_size": 63488 00:25:57.938 }, 00:25:57.938 { 00:25:57.938 "name": "BaseBdev4", 00:25:57.938 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:57.938 "is_configured": true, 00:25:57.938 "data_offset": 2048, 00:25:57.938 "data_size": 63488 00:25:57.938 } 00:25:57.938 ] 00:25:57.938 }' 00:25:57.938 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.938 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.198 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.198 07:23:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.198 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.198 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 93408ed2-61b5-4c69-b36b-e6e1bf395bfd 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.458 [2024-11-20 07:23:22.615019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:58.458 [2024-11-20 07:23:22.615576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:58.458 NewBaseBdev 00:25:58.458 [2024-11-20 07:23:22.615745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:58.458 [2024-11-20 07:23:22.616120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:25:58.458 [2024-11-20 07:23:22.616334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:58.458 [2024-11-20 07:23:22.616351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 [2024-11-20 07:23:22.616524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.458 [ 00:25:58.458 { 00:25:58.458 "name": "NewBaseBdev", 00:25:58.458 "aliases": [ 00:25:58.458 "93408ed2-61b5-4c69-b36b-e6e1bf395bfd" 00:25:58.458 ], 00:25:58.458 "product_name": "Malloc disk", 00:25:58.458 "block_size": 512, 00:25:58.458 "num_blocks": 65536, 00:25:58.458 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:58.458 "assigned_rate_limits": { 00:25:58.458 "rw_ios_per_sec": 0, 00:25:58.458 "rw_mbytes_per_sec": 0, 00:25:58.458 "r_mbytes_per_sec": 0, 00:25:58.458 "w_mbytes_per_sec": 0 00:25:58.458 }, 00:25:58.458 "claimed": true, 00:25:58.458 "claim_type": "exclusive_write", 00:25:58.458 "zoned": false, 00:25:58.458 "supported_io_types": { 00:25:58.458 "read": true, 00:25:58.458 "write": true, 00:25:58.458 "unmap": true, 00:25:58.458 "flush": true, 00:25:58.458 "reset": true, 00:25:58.458 "nvme_admin": false, 00:25:58.458 "nvme_io": false, 00:25:58.458 "nvme_io_md": false, 00:25:58.458 "write_zeroes": true, 00:25:58.458 "zcopy": true, 00:25:58.458 "get_zone_info": false, 00:25:58.458 "zone_management": false, 00:25:58.458 "zone_append": false, 00:25:58.458 "compare": false, 00:25:58.458 "compare_and_write": false, 00:25:58.458 "abort": true, 00:25:58.458 "seek_hole": false, 00:25:58.458 "seek_data": false, 00:25:58.458 "copy": true, 00:25:58.458 "nvme_iov_md": false 00:25:58.458 }, 00:25:58.458 "memory_domains": [ 00:25:58.458 { 00:25:58.458 "dma_device_id": "system", 00:25:58.458 "dma_device_type": 1 00:25:58.458 }, 00:25:58.458 { 00:25:58.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.458 "dma_device_type": 2 00:25:58.458 } 00:25:58.458 ], 00:25:58.458 "driver_specific": {} 00:25:58.458 } 00:25:58.458 ] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.458 "name": "Existed_Raid", 00:25:58.458 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:58.458 "strip_size_kb": 0, 00:25:58.458 "state": "online", 00:25:58.458 "raid_level": 
"raid1", 00:25:58.458 "superblock": true, 00:25:58.458 "num_base_bdevs": 4, 00:25:58.458 "num_base_bdevs_discovered": 4, 00:25:58.458 "num_base_bdevs_operational": 4, 00:25:58.458 "base_bdevs_list": [ 00:25:58.458 { 00:25:58.458 "name": "NewBaseBdev", 00:25:58.458 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:58.458 "is_configured": true, 00:25:58.458 "data_offset": 2048, 00:25:58.458 "data_size": 63488 00:25:58.458 }, 00:25:58.458 { 00:25:58.458 "name": "BaseBdev2", 00:25:58.458 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:58.458 "is_configured": true, 00:25:58.458 "data_offset": 2048, 00:25:58.458 "data_size": 63488 00:25:58.458 }, 00:25:58.458 { 00:25:58.458 "name": "BaseBdev3", 00:25:58.458 "uuid": "f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:58.458 "is_configured": true, 00:25:58.458 "data_offset": 2048, 00:25:58.458 "data_size": 63488 00:25:58.458 }, 00:25:58.458 { 00:25:58.458 "name": "BaseBdev4", 00:25:58.458 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:58.458 "is_configured": true, 00:25:58.458 "data_offset": 2048, 00:25:58.458 "data_size": 63488 00:25:58.458 } 00:25:58.458 ] 00:25:58.458 }' 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.458 07:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.028 [2024-11-20 07:23:23.171719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.028 "name": "Existed_Raid", 00:25:59.028 "aliases": [ 00:25:59.028 "aba32ea1-de41-4adf-a4da-9e2fa978558b" 00:25:59.028 ], 00:25:59.028 "product_name": "Raid Volume", 00:25:59.028 "block_size": 512, 00:25:59.028 "num_blocks": 63488, 00:25:59.028 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:59.028 "assigned_rate_limits": { 00:25:59.028 "rw_ios_per_sec": 0, 00:25:59.028 "rw_mbytes_per_sec": 0, 00:25:59.028 "r_mbytes_per_sec": 0, 00:25:59.028 "w_mbytes_per_sec": 0 00:25:59.028 }, 00:25:59.028 "claimed": false, 00:25:59.028 "zoned": false, 00:25:59.028 "supported_io_types": { 00:25:59.028 "read": true, 00:25:59.028 "write": true, 00:25:59.028 "unmap": false, 00:25:59.028 "flush": false, 00:25:59.028 "reset": true, 00:25:59.028 "nvme_admin": false, 00:25:59.028 "nvme_io": false, 00:25:59.028 "nvme_io_md": false, 00:25:59.028 "write_zeroes": true, 00:25:59.028 "zcopy": false, 00:25:59.028 "get_zone_info": false, 00:25:59.028 "zone_management": false, 00:25:59.028 "zone_append": false, 00:25:59.028 "compare": false, 00:25:59.028 "compare_and_write": false, 00:25:59.028 "abort": false, 00:25:59.028 "seek_hole": false, 
00:25:59.028 "seek_data": false, 00:25:59.028 "copy": false, 00:25:59.028 "nvme_iov_md": false 00:25:59.028 }, 00:25:59.028 "memory_domains": [ 00:25:59.028 { 00:25:59.028 "dma_device_id": "system", 00:25:59.028 "dma_device_type": 1 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.028 "dma_device_type": 2 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "system", 00:25:59.028 "dma_device_type": 1 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.028 "dma_device_type": 2 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "system", 00:25:59.028 "dma_device_type": 1 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.028 "dma_device_type": 2 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "system", 00:25:59.028 "dma_device_type": 1 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.028 "dma_device_type": 2 00:25:59.028 } 00:25:59.028 ], 00:25:59.028 "driver_specific": { 00:25:59.028 "raid": { 00:25:59.028 "uuid": "aba32ea1-de41-4adf-a4da-9e2fa978558b", 00:25:59.028 "strip_size_kb": 0, 00:25:59.028 "state": "online", 00:25:59.028 "raid_level": "raid1", 00:25:59.028 "superblock": true, 00:25:59.028 "num_base_bdevs": 4, 00:25:59.028 "num_base_bdevs_discovered": 4, 00:25:59.028 "num_base_bdevs_operational": 4, 00:25:59.028 "base_bdevs_list": [ 00:25:59.028 { 00:25:59.028 "name": "NewBaseBdev", 00:25:59.028 "uuid": "93408ed2-61b5-4c69-b36b-e6e1bf395bfd", 00:25:59.028 "is_configured": true, 00:25:59.028 "data_offset": 2048, 00:25:59.028 "data_size": 63488 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "name": "BaseBdev2", 00:25:59.028 "uuid": "56cb3d08-2b1b-4b12-947f-253e3a5c5de7", 00:25:59.028 "is_configured": true, 00:25:59.028 "data_offset": 2048, 00:25:59.028 "data_size": 63488 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "name": "BaseBdev3", 00:25:59.028 "uuid": 
"f5ac2692-ddff-48f7-8473-b093286ccd83", 00:25:59.028 "is_configured": true, 00:25:59.028 "data_offset": 2048, 00:25:59.028 "data_size": 63488 00:25:59.028 }, 00:25:59.028 { 00:25:59.028 "name": "BaseBdev4", 00:25:59.028 "uuid": "9ae1980c-f41f-40db-a016-3f668d0799a5", 00:25:59.028 "is_configured": true, 00:25:59.028 "data_offset": 2048, 00:25:59.028 "data_size": 63488 00:25:59.028 } 00:25:59.028 ] 00:25:59.028 } 00:25:59.028 } 00:25:59.028 }' 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:59.028 BaseBdev2 00:25:59.028 BaseBdev3 00:25:59.028 BaseBdev4' 00:25:59.028 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.287 
07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.287 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.287 [2024-11-20 07:23:23.563379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:59.288 [2024-11-20 07:23:23.563565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.288 [2024-11-20 07:23:23.563860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.288 [2024-11-20 07:23:23.564256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.288 [2024-11-20 07:23:23.564280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:59.288 07:23:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.288 07:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74213 00:25:59.288 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74213 ']' 00:25:59.288 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74213 00:25:59.288 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:59.288 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74213 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.546 killing process with pid 74213 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74213' 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74213 00:25:59.546 [2024-11-20 07:23:23.606613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.546 07:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74213 00:25:59.806 [2024-11-20 07:23:23.974696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.182 07:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:01.182 00:26:01.182 real 0m13.043s 00:26:01.182 user 0m21.505s 00:26:01.182 sys 0m1.913s 00:26:01.182 07:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.182 07:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.182 ************************************ 00:26:01.182 END TEST raid_state_function_test_sb 00:26:01.182 ************************************ 00:26:01.182 07:23:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:01.182 07:23:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:01.182 07:23:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.182 07:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 ************************************ 00:26:01.182 START TEST raid_superblock_test 00:26:01.182 ************************************ 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:26:01.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74897 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74897 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74897 ']' 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.182 07:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 [2024-11-20 07:23:25.214634] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:26:01.182 [2024-11-20 07:23:25.214829] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:26:01.182 [2024-11-20 07:23:25.401297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.440 [2024-11-20 07:23:25.535832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.698 [2024-11-20 07:23:25.743376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.698 [2024-11-20 07:23:25.743434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:01.956 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:01.957 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:01.957 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:01.957 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:01.957 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:01.957 
07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.957 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.215 malloc1 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.215 [2024-11-20 07:23:26.255688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:02.215 [2024-11-20 07:23:26.255920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.215 [2024-11-20 07:23:26.255998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:02.215 [2024-11-20 07:23:26.256134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.215 [2024-11-20 07:23:26.259099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.215 [2024-11-20 07:23:26.259267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:02.215 pt1 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.215 malloc2 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.215 [2024-11-20 07:23:26.312413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:02.215 [2024-11-20 07:23:26.312652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.215 [2024-11-20 07:23:26.312699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:02.215 [2024-11-20 07:23:26.312715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.215 [2024-11-20 07:23:26.315697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.215 [2024-11-20 07:23:26.315745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:02.215 
pt2 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.215 malloc3 00:26:02.215 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.216 [2024-11-20 07:23:26.383556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:02.216 [2024-11-20 07:23:26.383768] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.216 [2024-11-20 07:23:26.383849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:02.216 [2024-11-20 07:23:26.383961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.216 [2024-11-20 07:23:26.386950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.216 [2024-11-20 07:23:26.387132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:02.216 pt3 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.216 malloc4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.216 [2024-11-20 07:23:26.440241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:02.216 [2024-11-20 07:23:26.440431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.216 [2024-11-20 07:23:26.440511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:02.216 [2024-11-20 07:23:26.440642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.216 [2024-11-20 07:23:26.443485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.216 [2024-11-20 07:23:26.443653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:02.216 pt4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.216 [2024-11-20 07:23:26.452427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:02.216 [2024-11-20 07:23:26.455014] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:02.216 [2024-11-20 07:23:26.455232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:02.216 [2024-11-20 07:23:26.455349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:02.216 [2024-11-20 07:23:26.455682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:02.216 [2024-11-20 07:23:26.455810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:02.216 [2024-11-20 07:23:26.456240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:02.216 [2024-11-20 07:23:26.456484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:02.216 [2024-11-20 07:23:26.456511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:02.216 [2024-11-20 07:23:26.456789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.216 
07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.216 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.474 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.474 "name": "raid_bdev1", 00:26:02.474 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:02.474 "strip_size_kb": 0, 00:26:02.474 "state": "online", 00:26:02.474 "raid_level": "raid1", 00:26:02.474 "superblock": true, 00:26:02.474 "num_base_bdevs": 4, 00:26:02.474 "num_base_bdevs_discovered": 4, 00:26:02.474 "num_base_bdevs_operational": 4, 00:26:02.474 "base_bdevs_list": [ 00:26:02.474 { 00:26:02.474 "name": "pt1", 00:26:02.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:02.474 "is_configured": true, 00:26:02.474 "data_offset": 2048, 00:26:02.474 "data_size": 63488 00:26:02.474 }, 00:26:02.474 { 00:26:02.474 "name": "pt2", 00:26:02.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.474 "is_configured": true, 00:26:02.474 "data_offset": 2048, 00:26:02.474 "data_size": 63488 00:26:02.474 }, 00:26:02.474 { 00:26:02.474 "name": "pt3", 00:26:02.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.474 "is_configured": true, 00:26:02.474 "data_offset": 2048, 00:26:02.474 "data_size": 63488 
00:26:02.474 }, 00:26:02.474 { 00:26:02.474 "name": "pt4", 00:26:02.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:02.474 "is_configured": true, 00:26:02.474 "data_offset": 2048, 00:26:02.474 "data_size": 63488 00:26:02.474 } 00:26:02.474 ] 00:26:02.474 }' 00:26:02.474 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.474 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.733 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.733 [2024-11-20 07:23:26.969307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:02.734 07:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.734 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.734 "name": "raid_bdev1", 00:26:02.734 "aliases": [ 00:26:02.734 "ab5c86b7-74fe-41ab-98c8-1c225e26fca0" 00:26:02.734 ], 
00:26:02.734 "product_name": "Raid Volume", 00:26:02.734 "block_size": 512, 00:26:02.734 "num_blocks": 63488, 00:26:02.734 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:02.734 "assigned_rate_limits": { 00:26:02.734 "rw_ios_per_sec": 0, 00:26:02.734 "rw_mbytes_per_sec": 0, 00:26:02.734 "r_mbytes_per_sec": 0, 00:26:02.734 "w_mbytes_per_sec": 0 00:26:02.734 }, 00:26:02.734 "claimed": false, 00:26:02.734 "zoned": false, 00:26:02.734 "supported_io_types": { 00:26:02.734 "read": true, 00:26:02.734 "write": true, 00:26:02.734 "unmap": false, 00:26:02.734 "flush": false, 00:26:02.734 "reset": true, 00:26:02.734 "nvme_admin": false, 00:26:02.734 "nvme_io": false, 00:26:02.734 "nvme_io_md": false, 00:26:02.734 "write_zeroes": true, 00:26:02.734 "zcopy": false, 00:26:02.734 "get_zone_info": false, 00:26:02.734 "zone_management": false, 00:26:02.734 "zone_append": false, 00:26:02.734 "compare": false, 00:26:02.734 "compare_and_write": false, 00:26:02.734 "abort": false, 00:26:02.734 "seek_hole": false, 00:26:02.734 "seek_data": false, 00:26:02.734 "copy": false, 00:26:02.734 "nvme_iov_md": false 00:26:02.734 }, 00:26:02.734 "memory_domains": [ 00:26:02.734 { 00:26:02.734 "dma_device_id": "system", 00:26:02.734 "dma_device_type": 1 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.734 "dma_device_type": 2 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "system", 00:26:02.734 "dma_device_type": 1 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.734 "dma_device_type": 2 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "system", 00:26:02.734 "dma_device_type": 1 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.734 "dma_device_type": 2 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": "system", 00:26:02.734 "dma_device_type": 1 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:02.734 "dma_device_type": 2 00:26:02.734 } 00:26:02.734 ], 00:26:02.734 "driver_specific": { 00:26:02.734 "raid": { 00:26:02.734 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:02.734 "strip_size_kb": 0, 00:26:02.734 "state": "online", 00:26:02.734 "raid_level": "raid1", 00:26:02.734 "superblock": true, 00:26:02.734 "num_base_bdevs": 4, 00:26:02.734 "num_base_bdevs_discovered": 4, 00:26:02.734 "num_base_bdevs_operational": 4, 00:26:02.734 "base_bdevs_list": [ 00:26:02.734 { 00:26:02.734 "name": "pt1", 00:26:02.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:02.734 "is_configured": true, 00:26:02.734 "data_offset": 2048, 00:26:02.734 "data_size": 63488 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "name": "pt2", 00:26:02.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.734 "is_configured": true, 00:26:02.734 "data_offset": 2048, 00:26:02.734 "data_size": 63488 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "name": "pt3", 00:26:02.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.734 "is_configured": true, 00:26:02.734 "data_offset": 2048, 00:26:02.734 "data_size": 63488 00:26:02.734 }, 00:26:02.734 { 00:26:02.734 "name": "pt4", 00:26:02.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:02.734 "is_configured": true, 00:26:02.734 "data_offset": 2048, 00:26:02.734 "data_size": 63488 00:26:02.734 } 00:26:02.734 ] 00:26:02.734 } 00:26:02.734 } 00:26:02.734 }' 00:26:02.734 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:02.993 pt2 00:26:02.993 pt3 00:26:02.993 pt4' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:02.993 07:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.993 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:03.253 [2024-11-20 07:23:27.337407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab5c86b7-74fe-41ab-98c8-1c225e26fca0 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab5c86b7-74fe-41ab-98c8-1c225e26fca0 ']' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 [2024-11-20 07:23:27.385040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:03.253 [2024-11-20 07:23:27.385284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.253 [2024-11-20 07:23:27.385580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.253 [2024-11-20 07:23:27.385725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.253 [2024-11-20 07:23:27.385752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.253 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.253 [2024-11-20 07:23:27.537084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:03.253 [2024-11-20 07:23:27.539856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:03.254 [2024-11-20 07:23:27.539942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:03.254 [2024-11-20 07:23:27.539996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:03.254 [2024-11-20 07:23:27.540073] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:03.254 [2024-11-20 07:23:27.540156] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:03.254 [2024-11-20 07:23:27.540191] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:03.254 [2024-11-20 07:23:27.540222] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:03.254 [2024-11-20 07:23:27.540245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:03.254 [2024-11-20 07:23:27.540262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:26:03.514 request: 00:26:03.514 { 00:26:03.514 "name": "raid_bdev1", 00:26:03.514 "raid_level": "raid1", 00:26:03.514 "base_bdevs": [ 00:26:03.514 "malloc1", 00:26:03.514 "malloc2", 00:26:03.514 "malloc3", 00:26:03.514 "malloc4" 00:26:03.514 ], 00:26:03.514 "superblock": false, 00:26:03.514 "method": "bdev_raid_create", 00:26:03.514 "req_id": 1 00:26:03.514 } 00:26:03.514 Got JSON-RPC error response 00:26:03.514 response: 00:26:03.514 { 00:26:03.514 "code": -17, 00:26:03.514 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:03.514 } 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:03.514 07:23:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.514 [2024-11-20 07:23:27.589163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:03.514 [2024-11-20 07:23:27.589497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.514 [2024-11-20 07:23:27.589576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:03.514 [2024-11-20 07:23:27.589619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.514 [2024-11-20 07:23:27.592624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.514 [2024-11-20 07:23:27.592680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:03.514 [2024-11-20 07:23:27.592800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:03.514 [2024-11-20 07:23:27.592878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:03.514 pt1 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:03.514 07:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.514 "name": "raid_bdev1", 00:26:03.514 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:03.514 "strip_size_kb": 0, 00:26:03.514 "state": "configuring", 00:26:03.514 "raid_level": "raid1", 00:26:03.514 "superblock": true, 00:26:03.514 "num_base_bdevs": 4, 00:26:03.514 "num_base_bdevs_discovered": 1, 00:26:03.514 "num_base_bdevs_operational": 4, 00:26:03.514 "base_bdevs_list": [ 00:26:03.514 { 00:26:03.514 "name": "pt1", 00:26:03.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:03.514 "is_configured": true, 00:26:03.514 "data_offset": 2048, 00:26:03.514 "data_size": 63488 00:26:03.514 }, 00:26:03.514 { 00:26:03.514 "name": null, 00:26:03.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:03.514 "is_configured": false, 00:26:03.514 "data_offset": 2048, 00:26:03.514 "data_size": 63488 00:26:03.514 }, 00:26:03.514 { 00:26:03.514 "name": null, 00:26:03.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:03.514 
"is_configured": false, 00:26:03.514 "data_offset": 2048, 00:26:03.514 "data_size": 63488 00:26:03.514 }, 00:26:03.514 { 00:26:03.514 "name": null, 00:26:03.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:03.514 "is_configured": false, 00:26:03.514 "data_offset": 2048, 00:26:03.514 "data_size": 63488 00:26:03.514 } 00:26:03.514 ] 00:26:03.514 }' 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.514 07:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.082 [2024-11-20 07:23:28.125303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:04.082 [2024-11-20 07:23:28.125634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.082 [2024-11-20 07:23:28.125674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:04.082 [2024-11-20 07:23:28.125693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.082 [2024-11-20 07:23:28.126260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.082 [2024-11-20 07:23:28.126292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:04.082 [2024-11-20 07:23:28.126397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:04.082 [2024-11-20 07:23:28.126440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:26:04.082 pt2 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.082 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.082 [2024-11-20 07:23:28.133277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.083 "name": "raid_bdev1", 00:26:04.083 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:04.083 "strip_size_kb": 0, 00:26:04.083 "state": "configuring", 00:26:04.083 "raid_level": "raid1", 00:26:04.083 "superblock": true, 00:26:04.083 "num_base_bdevs": 4, 00:26:04.083 "num_base_bdevs_discovered": 1, 00:26:04.083 "num_base_bdevs_operational": 4, 00:26:04.083 "base_bdevs_list": [ 00:26:04.083 { 00:26:04.083 "name": "pt1", 00:26:04.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:04.083 "is_configured": true, 00:26:04.083 "data_offset": 2048, 00:26:04.083 "data_size": 63488 00:26:04.083 }, 00:26:04.083 { 00:26:04.083 "name": null, 00:26:04.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:04.083 "is_configured": false, 00:26:04.083 "data_offset": 0, 00:26:04.083 "data_size": 63488 00:26:04.083 }, 00:26:04.083 { 00:26:04.083 "name": null, 00:26:04.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:04.083 "is_configured": false, 00:26:04.083 "data_offset": 2048, 00:26:04.083 "data_size": 63488 00:26:04.083 }, 00:26:04.083 { 00:26:04.083 "name": null, 00:26:04.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:04.083 "is_configured": false, 00:26:04.083 "data_offset": 2048, 00:26:04.083 "data_size": 63488 00:26:04.083 } 00:26:04.083 ] 00:26:04.083 }' 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.083 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 [2024-11-20 07:23:28.681425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:04.700 [2024-11-20 07:23:28.681516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.700 [2024-11-20 07:23:28.681557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:04.700 [2024-11-20 07:23:28.681575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.700 [2024-11-20 07:23:28.682174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.700 [2024-11-20 07:23:28.682207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:04.700 [2024-11-20 07:23:28.682322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:04.700 [2024-11-20 07:23:28.682356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:04.700 pt2 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:04.700 07:23:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 [2024-11-20 07:23:28.693426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:04.700 [2024-11-20 07:23:28.693758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.700 [2024-11-20 07:23:28.693804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:04.700 [2024-11-20 07:23:28.693821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.700 [2024-11-20 07:23:28.694392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.700 [2024-11-20 07:23:28.694419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:04.700 [2024-11-20 07:23:28.694527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:04.700 [2024-11-20 07:23:28.694565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:04.700 pt3 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 [2024-11-20 07:23:28.705425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:04.700 [2024-11-20 
07:23:28.705740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.700 [2024-11-20 07:23:28.705892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:04.700 [2024-11-20 07:23:28.706020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.700 [2024-11-20 07:23:28.706680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.700 [2024-11-20 07:23:28.706847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:04.700 [2024-11-20 07:23:28.707158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:04.700 [2024-11-20 07:23:28.707290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:04.700 [2024-11-20 07:23:28.707548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:04.700 [2024-11-20 07:23:28.707687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:04.700 [2024-11-20 07:23:28.708079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:04.700 [2024-11-20 07:23:28.708399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:04.700 [2024-11-20 07:23:28.708528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:04.700 [2024-11-20 07:23:28.708850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.700 pt4 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.700 "name": "raid_bdev1", 00:26:04.700 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:04.700 "strip_size_kb": 0, 00:26:04.700 "state": "online", 00:26:04.700 "raid_level": "raid1", 00:26:04.700 "superblock": true, 00:26:04.700 "num_base_bdevs": 4, 00:26:04.700 
"num_base_bdevs_discovered": 4, 00:26:04.700 "num_base_bdevs_operational": 4, 00:26:04.700 "base_bdevs_list": [ 00:26:04.700 { 00:26:04.700 "name": "pt1", 00:26:04.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:04.700 "is_configured": true, 00:26:04.700 "data_offset": 2048, 00:26:04.700 "data_size": 63488 00:26:04.700 }, 00:26:04.700 { 00:26:04.700 "name": "pt2", 00:26:04.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:04.700 "is_configured": true, 00:26:04.700 "data_offset": 2048, 00:26:04.700 "data_size": 63488 00:26:04.700 }, 00:26:04.700 { 00:26:04.700 "name": "pt3", 00:26:04.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:04.700 "is_configured": true, 00:26:04.700 "data_offset": 2048, 00:26:04.700 "data_size": 63488 00:26:04.700 }, 00:26:04.700 { 00:26:04.700 "name": "pt4", 00:26:04.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:04.700 "is_configured": true, 00:26:04.700 "data_offset": 2048, 00:26:04.700 "data_size": 63488 00:26:04.700 } 00:26:04.700 ] 00:26:04.700 }' 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.700 07:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.960 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:04.960 [2024-11-20 07:23:29.242026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:05.219 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.219 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:05.219 "name": "raid_bdev1", 00:26:05.219 "aliases": [ 00:26:05.219 "ab5c86b7-74fe-41ab-98c8-1c225e26fca0" 00:26:05.219 ], 00:26:05.219 "product_name": "Raid Volume", 00:26:05.219 "block_size": 512, 00:26:05.219 "num_blocks": 63488, 00:26:05.219 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:05.219 "assigned_rate_limits": { 00:26:05.219 "rw_ios_per_sec": 0, 00:26:05.219 "rw_mbytes_per_sec": 0, 00:26:05.219 "r_mbytes_per_sec": 0, 00:26:05.219 "w_mbytes_per_sec": 0 00:26:05.219 }, 00:26:05.219 "claimed": false, 00:26:05.219 "zoned": false, 00:26:05.219 "supported_io_types": { 00:26:05.219 "read": true, 00:26:05.219 "write": true, 00:26:05.219 "unmap": false, 00:26:05.219 "flush": false, 00:26:05.219 "reset": true, 00:26:05.219 "nvme_admin": false, 00:26:05.219 "nvme_io": false, 00:26:05.219 "nvme_io_md": false, 00:26:05.219 "write_zeroes": true, 00:26:05.219 "zcopy": false, 00:26:05.219 "get_zone_info": false, 00:26:05.219 "zone_management": false, 00:26:05.219 "zone_append": false, 00:26:05.219 "compare": false, 00:26:05.219 "compare_and_write": false, 00:26:05.219 "abort": false, 00:26:05.219 "seek_hole": false, 00:26:05.219 "seek_data": false, 00:26:05.219 "copy": false, 00:26:05.219 "nvme_iov_md": false 00:26:05.219 }, 00:26:05.219 "memory_domains": [ 00:26:05.219 { 00:26:05.219 "dma_device_id": "system", 00:26:05.219 
"dma_device_type": 1 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.219 "dma_device_type": 2 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "system", 00:26:05.219 "dma_device_type": 1 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.219 "dma_device_type": 2 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "system", 00:26:05.219 "dma_device_type": 1 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.219 "dma_device_type": 2 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "system", 00:26:05.219 "dma_device_type": 1 00:26:05.219 }, 00:26:05.219 { 00:26:05.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.219 "dma_device_type": 2 00:26:05.219 } 00:26:05.219 ], 00:26:05.219 "driver_specific": { 00:26:05.219 "raid": { 00:26:05.219 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:05.219 "strip_size_kb": 0, 00:26:05.219 "state": "online", 00:26:05.219 "raid_level": "raid1", 00:26:05.220 "superblock": true, 00:26:05.220 "num_base_bdevs": 4, 00:26:05.220 "num_base_bdevs_discovered": 4, 00:26:05.220 "num_base_bdevs_operational": 4, 00:26:05.220 "base_bdevs_list": [ 00:26:05.220 { 00:26:05.220 "name": "pt1", 00:26:05.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:05.220 "is_configured": true, 00:26:05.220 "data_offset": 2048, 00:26:05.220 "data_size": 63488 00:26:05.220 }, 00:26:05.220 { 00:26:05.220 "name": "pt2", 00:26:05.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:05.220 "is_configured": true, 00:26:05.220 "data_offset": 2048, 00:26:05.220 "data_size": 63488 00:26:05.220 }, 00:26:05.220 { 00:26:05.220 "name": "pt3", 00:26:05.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:05.220 "is_configured": true, 00:26:05.220 "data_offset": 2048, 00:26:05.220 "data_size": 63488 00:26:05.220 }, 00:26:05.220 { 00:26:05.220 "name": "pt4", 00:26:05.220 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:26:05.220 "is_configured": true, 00:26:05.220 "data_offset": 2048, 00:26:05.220 "data_size": 63488 00:26:05.220 } 00:26:05.220 ] 00:26:05.220 } 00:26:05.220 } 00:26:05.220 }' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:05.220 pt2 00:26:05.220 pt3 00:26:05.220 pt4' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.220 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 [2024-11-20 07:23:29.626088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab5c86b7-74fe-41ab-98c8-1c225e26fca0 '!=' ab5c86b7-74fe-41ab-98c8-1c225e26fca0 ']' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 [2024-11-20 07:23:29.673795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:05.479 07:23:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.479 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.479 "name": "raid_bdev1", 00:26:05.479 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:05.479 "strip_size_kb": 0, 00:26:05.479 "state": "online", 
00:26:05.479 "raid_level": "raid1", 00:26:05.480 "superblock": true, 00:26:05.480 "num_base_bdevs": 4, 00:26:05.480 "num_base_bdevs_discovered": 3, 00:26:05.480 "num_base_bdevs_operational": 3, 00:26:05.480 "base_bdevs_list": [ 00:26:05.480 { 00:26:05.480 "name": null, 00:26:05.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.480 "is_configured": false, 00:26:05.480 "data_offset": 0, 00:26:05.480 "data_size": 63488 00:26:05.480 }, 00:26:05.480 { 00:26:05.480 "name": "pt2", 00:26:05.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:05.480 "is_configured": true, 00:26:05.480 "data_offset": 2048, 00:26:05.480 "data_size": 63488 00:26:05.480 }, 00:26:05.480 { 00:26:05.480 "name": "pt3", 00:26:05.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:05.480 "is_configured": true, 00:26:05.480 "data_offset": 2048, 00:26:05.480 "data_size": 63488 00:26:05.480 }, 00:26:05.480 { 00:26:05.480 "name": "pt4", 00:26:05.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:05.480 "is_configured": true, 00:26:05.480 "data_offset": 2048, 00:26:05.480 "data_size": 63488 00:26:05.480 } 00:26:05.480 ] 00:26:05.480 }' 00:26:05.480 07:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.480 07:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 [2024-11-20 07:23:30.237847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:06.047 [2024-11-20 07:23:30.237890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:06.047 [2024-11-20 07:23:30.237993] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:26:06.047 [2024-11-20 07:23:30.238101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:06.047 [2024-11-20 07:23:30.238118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:06.047 
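(Editorial note, not part of the captured log.) The trace above extracts the configured base bdev names with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` before iterating over them. The same selection can be sketched in Python against a `base_bdevs_list` like the one dumped in the trace (after `pt1` was removed, its slot shows a null name and `is_configured: false`); the list literal below is transcribed from the log, not invented:

```python
# Sketch of the jq selection used by verify_raid_bdev_properties:
#   select(.is_configured == true).name
# applied to a base_bdevs_list as dumped by `rpc.py bdev_get_bdevs`.
base_bdevs_list = [
    {"name": None,  "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": False},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002", "is_configured": True},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003", "is_configured": True},
    {"name": "pt4", "uuid": "00000000-0000-0000-0000-000000000004", "is_configured": True},
]

# Keep only entries still configured into the RAID volume.
configured = [b["name"] for b in base_bdevs_list if b["is_configured"]]
print(configured)  # ['pt2', 'pt3', 'pt4']
```

This matches the `base_bdev_names='pt2 pt3 pt4'`-style value the shell test stores before looping `for name in $base_bdev_names`.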
07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.047 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.047 [2024-11-20 07:23:30.333889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:06.047 [2024-11-20 07:23:30.334105] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.047 [2024-11-20 07:23:30.334183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:06.047 [2024-11-20 07:23:30.334397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.306 [2024-11-20 07:23:30.337391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.306 [2024-11-20 07:23:30.337546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:06.306 [2024-11-20 07:23:30.337708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:06.306 [2024-11-20 07:23:30.337771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:06.306 pt2 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.306 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.307 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.307 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.307 "name": "raid_bdev1", 00:26:06.307 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:06.307 "strip_size_kb": 0, 00:26:06.307 "state": "configuring", 00:26:06.307 "raid_level": "raid1", 00:26:06.307 "superblock": true, 00:26:06.307 "num_base_bdevs": 4, 00:26:06.307 "num_base_bdevs_discovered": 1, 00:26:06.307 "num_base_bdevs_operational": 3, 00:26:06.307 "base_bdevs_list": [ 00:26:06.307 { 00:26:06.307 "name": null, 00:26:06.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.307 "is_configured": false, 00:26:06.307 "data_offset": 2048, 00:26:06.307 "data_size": 63488 00:26:06.307 }, 00:26:06.307 { 00:26:06.307 "name": "pt2", 00:26:06.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:06.307 "is_configured": true, 00:26:06.307 "data_offset": 2048, 00:26:06.307 "data_size": 63488 00:26:06.307 }, 00:26:06.307 { 00:26:06.307 "name": null, 00:26:06.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:06.307 "is_configured": false, 00:26:06.307 "data_offset": 2048, 00:26:06.307 "data_size": 63488 00:26:06.307 }, 00:26:06.307 { 00:26:06.307 "name": null, 00:26:06.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:06.307 "is_configured": false, 00:26:06.307 "data_offset": 2048, 00:26:06.307 "data_size": 63488 00:26:06.307 } 00:26:06.307 ] 00:26:06.307 }' 
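(Editorial note, not part of the captured log.) The `verify_raid_bdev_state raid_bdev1 configuring raid1 0 3` call traced above fetches `bdev_raid_get_bdevs all`, selects the entry named `raid_bdev1` with `jq`, and compares its state, RAID level, strip size, and base-bdev counts against the expected values. A minimal Python sketch of those comparisons, using field values transcribed from the `raid_bdev_info` JSON dumped in the trace (the `verify_state` helper name is hypothetical, introduced only for illustration):

```python
import json

# Trimmed from the raid_bdev_info dump above: after deleting pt1 and
# re-adding pt2, the volume sits in "configuring" with 1 of 3
# operational base bdevs discovered.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}
""")

def verify_state(info, expected_state, raid_level, strip_size_kb, operational):
    # Mirrors the field-by-field comparisons the shell helper performs.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == operational

verify_state(raid_bdev_info, "configuring", "raid1", 0, 3)
print("state check passed")
```

Note that `strip_size_kb` is 0 here because RAID1 mirrors rather than stripes; the same helper is reused later in the trace with `online` once enough base bdevs are reattached.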
00:26:06.307 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.307 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 [2024-11-20 07:23:30.866163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:06.874 [2024-11-20 07:23:30.866432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.874 [2024-11-20 07:23:30.866511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:06.874 [2024-11-20 07:23:30.866653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.874 [2024-11-20 07:23:30.867316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.874 [2024-11-20 07:23:30.867345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:06.874 [2024-11-20 07:23:30.867461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:06.874 [2024-11-20 07:23:30.867494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:06.874 pt3 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.874 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.874 "name": "raid_bdev1", 00:26:06.874 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:06.874 "strip_size_kb": 0, 00:26:06.874 "state": "configuring", 00:26:06.874 "raid_level": "raid1", 00:26:06.874 "superblock": true, 00:26:06.874 "num_base_bdevs": 4, 00:26:06.874 "num_base_bdevs_discovered": 2, 00:26:06.874 "num_base_bdevs_operational": 3, 00:26:06.874 
"base_bdevs_list": [ 00:26:06.874 { 00:26:06.874 "name": null, 00:26:06.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.874 "is_configured": false, 00:26:06.874 "data_offset": 2048, 00:26:06.874 "data_size": 63488 00:26:06.874 }, 00:26:06.874 { 00:26:06.874 "name": "pt2", 00:26:06.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:06.874 "is_configured": true, 00:26:06.874 "data_offset": 2048, 00:26:06.874 "data_size": 63488 00:26:06.874 }, 00:26:06.874 { 00:26:06.874 "name": "pt3", 00:26:06.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:06.874 "is_configured": true, 00:26:06.874 "data_offset": 2048, 00:26:06.874 "data_size": 63488 00:26:06.874 }, 00:26:06.874 { 00:26:06.874 "name": null, 00:26:06.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:06.874 "is_configured": false, 00:26:06.874 "data_offset": 2048, 00:26:06.874 "data_size": 63488 00:26:06.874 } 00:26:06.875 ] 00:26:06.875 }' 00:26:06.875 07:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.875 07:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 [2024-11-20 07:23:31.410344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:07.133 [2024-11-20 07:23:31.410671] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.133 [2024-11-20 07:23:31.410754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:07.133 [2024-11-20 07:23:31.410909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.133 [2024-11-20 07:23:31.411559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.133 [2024-11-20 07:23:31.411601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:07.133 [2024-11-20 07:23:31.411724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:07.133 [2024-11-20 07:23:31.411765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:07.133 [2024-11-20 07:23:31.411944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:07.133 [2024-11-20 07:23:31.411967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:07.133 [2024-11-20 07:23:31.412288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:07.133 [2024-11-20 07:23:31.412492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:07.133 [2024-11-20 07:23:31.412514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:07.133 [2024-11-20 07:23:31.412712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.133 pt4 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.133 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.392 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.392 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.392 "name": "raid_bdev1", 00:26:07.392 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:07.392 "strip_size_kb": 0, 00:26:07.392 "state": "online", 00:26:07.392 "raid_level": "raid1", 00:26:07.392 "superblock": true, 00:26:07.392 "num_base_bdevs": 4, 00:26:07.392 "num_base_bdevs_discovered": 3, 00:26:07.392 "num_base_bdevs_operational": 3, 00:26:07.392 "base_bdevs_list": [ 00:26:07.392 { 00:26:07.392 "name": null, 00:26:07.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.392 "is_configured": false, 00:26:07.392 
"data_offset": 2048, 00:26:07.392 "data_size": 63488 00:26:07.392 }, 00:26:07.392 { 00:26:07.392 "name": "pt2", 00:26:07.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:07.392 "is_configured": true, 00:26:07.392 "data_offset": 2048, 00:26:07.392 "data_size": 63488 00:26:07.392 }, 00:26:07.392 { 00:26:07.392 "name": "pt3", 00:26:07.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:07.392 "is_configured": true, 00:26:07.392 "data_offset": 2048, 00:26:07.392 "data_size": 63488 00:26:07.392 }, 00:26:07.392 { 00:26:07.392 "name": "pt4", 00:26:07.392 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:07.392 "is_configured": true, 00:26:07.392 "data_offset": 2048, 00:26:07.392 "data_size": 63488 00:26:07.392 } 00:26:07.392 ] 00:26:07.392 }' 00:26:07.392 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.392 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.651 [2024-11-20 07:23:31.926391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.651 [2024-11-20 07:23:31.926613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:07.651 [2024-11-20 07:23:31.926744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.651 [2024-11-20 07:23:31.926850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.651 [2024-11-20 07:23:31.926893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:07.651 07:23:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.651 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.910 07:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.910 [2024-11-20 07:23:31.998418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:07.910 [2024-11-20 07:23:31.998646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:26:07.910 [2024-11-20 07:23:31.998687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:07.910 [2024-11-20 07:23:31.998711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.910 [2024-11-20 07:23:32.001761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.910 [2024-11-20 07:23:32.001815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:07.911 [2024-11-20 07:23:32.001937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:07.911 [2024-11-20 07:23:32.002002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:07.911 [2024-11-20 07:23:32.002169] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:07.911 [2024-11-20 07:23:32.002192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.911 [2024-11-20 07:23:32.002215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:07.911 [2024-11-20 07:23:32.002297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:07.911 [2024-11-20 07:23:32.002446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:07.911 pt1 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.911 "name": "raid_bdev1", 00:26:07.911 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:07.911 "strip_size_kb": 0, 00:26:07.911 "state": "configuring", 00:26:07.911 "raid_level": "raid1", 00:26:07.911 "superblock": true, 00:26:07.911 "num_base_bdevs": 4, 00:26:07.911 "num_base_bdevs_discovered": 2, 00:26:07.911 "num_base_bdevs_operational": 3, 00:26:07.911 "base_bdevs_list": [ 00:26:07.911 { 00:26:07.911 "name": null, 00:26:07.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.911 "is_configured": false, 00:26:07.911 "data_offset": 2048, 00:26:07.911 
"data_size": 63488 00:26:07.911 }, 00:26:07.911 { 00:26:07.911 "name": "pt2", 00:26:07.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:07.911 "is_configured": true, 00:26:07.911 "data_offset": 2048, 00:26:07.911 "data_size": 63488 00:26:07.911 }, 00:26:07.911 { 00:26:07.911 "name": "pt3", 00:26:07.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:07.911 "is_configured": true, 00:26:07.911 "data_offset": 2048, 00:26:07.911 "data_size": 63488 00:26:07.911 }, 00:26:07.911 { 00:26:07.911 "name": null, 00:26:07.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:07.911 "is_configured": false, 00:26:07.911 "data_offset": 2048, 00:26:07.911 "data_size": 63488 00:26:07.911 } 00:26:07.911 ] 00:26:07.911 }' 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.911 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.478 [2024-11-20 
07:23:32.598744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:08.478 [2024-11-20 07:23:32.599036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.478 [2024-11-20 07:23:32.599098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:08.478 [2024-11-20 07:23:32.599122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.478 [2024-11-20 07:23:32.599709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.478 [2024-11-20 07:23:32.599736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:08.478 [2024-11-20 07:23:32.599851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:08.478 [2024-11-20 07:23:32.599892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:08.478 [2024-11-20 07:23:32.600066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:08.478 [2024-11-20 07:23:32.600083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:08.478 [2024-11-20 07:23:32.600403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:08.478 [2024-11-20 07:23:32.600605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:08.478 [2024-11-20 07:23:32.600628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:08.478 [2024-11-20 07:23:32.600819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.478 pt4 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:08.478 07:23:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.478 "name": "raid_bdev1", 00:26:08.478 "uuid": "ab5c86b7-74fe-41ab-98c8-1c225e26fca0", 00:26:08.478 "strip_size_kb": 0, 00:26:08.478 "state": "online", 00:26:08.478 "raid_level": "raid1", 00:26:08.478 "superblock": true, 00:26:08.478 "num_base_bdevs": 4, 00:26:08.478 "num_base_bdevs_discovered": 3, 00:26:08.478 "num_base_bdevs_operational": 3, 00:26:08.478 "base_bdevs_list": [ 00:26:08.478 { 
00:26:08.478 "name": null, 00:26:08.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.478 "is_configured": false, 00:26:08.478 "data_offset": 2048, 00:26:08.478 "data_size": 63488 00:26:08.478 }, 00:26:08.478 { 00:26:08.478 "name": "pt2", 00:26:08.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:08.478 "is_configured": true, 00:26:08.478 "data_offset": 2048, 00:26:08.478 "data_size": 63488 00:26:08.478 }, 00:26:08.478 { 00:26:08.478 "name": "pt3", 00:26:08.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:08.478 "is_configured": true, 00:26:08.478 "data_offset": 2048, 00:26:08.478 "data_size": 63488 00:26:08.478 }, 00:26:08.478 { 00:26:08.478 "name": "pt4", 00:26:08.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:08.478 "is_configured": true, 00:26:08.478 "data_offset": 2048, 00:26:08.478 "data_size": 63488 00:26:08.478 } 00:26:08.478 ] 00:26:08.478 }' 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.478 07:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.047 
07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.047 [2024-11-20 07:23:33.147285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab5c86b7-74fe-41ab-98c8-1c225e26fca0 '!=' ab5c86b7-74fe-41ab-98c8-1c225e26fca0 ']' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74897 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74897 ']' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74897 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74897 00:26:09.047 killing process with pid 74897 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74897' 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74897 00:26:09.047 [2024-11-20 07:23:33.226949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.047 07:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74897 00:26:09.047 [2024-11-20 07:23:33.227080] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.047 [2024-11-20 07:23:33.227191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.047 [2024-11-20 07:23:33.227212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:09.306 [2024-11-20 07:23:33.592383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:10.683 07:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:10.683 00:26:10.683 real 0m9.572s 00:26:10.683 user 0m15.633s 00:26:10.683 sys 0m1.463s 00:26:10.683 07:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.683 07:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.683 ************************************ 00:26:10.683 END TEST raid_superblock_test 00:26:10.683 ************************************ 00:26:10.683 07:23:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:10.683 07:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:10.683 07:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.683 07:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.683 ************************************ 00:26:10.683 START TEST raid_read_error_test 00:26:10.683 ************************************ 00:26:10.683 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:26:10.683 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:10.683 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:26:10.683 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:10.683 07:23:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:10.683 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Rw5LkrlxBD 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75395 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75395 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75395 ']' 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.684 07:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.684 [2024-11-20 07:23:34.843425] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:26:10.684 [2024-11-20 07:23:34.843614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75395 ] 00:26:10.942 [2024-11-20 07:23:35.025188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.942 [2024-11-20 07:23:35.160197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.201 [2024-11-20 07:23:35.365700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.201 [2024-11-20 07:23:35.365782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 BaseBdev1_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 true 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 [2024-11-20 07:23:35.896996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:11.768 [2024-11-20 07:23:35.897302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.768 [2024-11-20 07:23:35.897350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:11.768 [2024-11-20 07:23:35.897371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.768 [2024-11-20 07:23:35.900425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.768 BaseBdev1 00:26:11.768 [2024-11-20 07:23:35.900671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 BaseBdev2_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 true 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 [2024-11-20 07:23:35.957548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:11.768 [2024-11-20 07:23:35.957762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.768 [2024-11-20 07:23:35.957938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:11.768 [2024-11-20 07:23:35.957971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.768 [2024-11-20 07:23:35.960898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.768 [2024-11-20 07:23:35.960949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:11.768 BaseBdev2 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 BaseBdev3_malloc 00:26:11.768 07:23:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 true 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.768 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.768 [2024-11-20 07:23:36.035145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:11.768 [2024-11-20 07:23:36.035349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.768 [2024-11-20 07:23:36.035390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:11.768 [2024-11-20 07:23:36.035409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.768 [2024-11-20 07:23:36.038268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.768 [2024-11-20 07:23:36.038319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:11.768 BaseBdev3 00:26:11.769 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.769 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:11.769 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:26:11.769 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.769 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.028 BaseBdev4_malloc 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.028 true 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.028 [2024-11-20 07:23:36.092089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:12.028 [2024-11-20 07:23:36.092344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.028 [2024-11-20 07:23:36.092389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:12.028 [2024-11-20 07:23:36.092409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.028 [2024-11-20 07:23:36.095522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.028 [2024-11-20 07:23:36.095614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:12.028 BaseBdev4 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.028 [2024-11-20 07:23:36.104483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.028 [2024-11-20 07:23:36.107089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.028 [2024-11-20 07:23:36.107219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:12.028 [2024-11-20 07:23:36.107328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:12.028 [2024-11-20 07:23:36.107844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:12.028 [2024-11-20 07:23:36.107972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:12.028 [2024-11-20 07:23:36.108465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:26:12.028 [2024-11-20 07:23:36.108856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:12.028 [2024-11-20 07:23:36.108990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:12.028 [2024-11-20 07:23:36.109305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:12.028 07:23:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.028 "name": "raid_bdev1", 00:26:12.028 "uuid": "093b68a0-d7e2-4950-9796-2c6b4e7ab74d", 00:26:12.028 "strip_size_kb": 0, 00:26:12.028 "state": "online", 00:26:12.028 "raid_level": "raid1", 00:26:12.028 "superblock": true, 00:26:12.028 "num_base_bdevs": 4, 00:26:12.028 "num_base_bdevs_discovered": 4, 00:26:12.028 "num_base_bdevs_operational": 4, 00:26:12.028 "base_bdevs_list": [ 00:26:12.028 { 
00:26:12.028 "name": "BaseBdev1", 00:26:12.028 "uuid": "fbf64779-2505-58d7-9199-1180df6ea022", 00:26:12.028 "is_configured": true, 00:26:12.028 "data_offset": 2048, 00:26:12.028 "data_size": 63488 00:26:12.028 }, 00:26:12.028 { 00:26:12.028 "name": "BaseBdev2", 00:26:12.028 "uuid": "479f8c85-6ae2-55aa-888a-779c87a5435c", 00:26:12.028 "is_configured": true, 00:26:12.028 "data_offset": 2048, 00:26:12.028 "data_size": 63488 00:26:12.028 }, 00:26:12.028 { 00:26:12.028 "name": "BaseBdev3", 00:26:12.028 "uuid": "26ee49c1-9e54-5339-bcd9-0541718e68f1", 00:26:12.028 "is_configured": true, 00:26:12.028 "data_offset": 2048, 00:26:12.028 "data_size": 63488 00:26:12.028 }, 00:26:12.028 { 00:26:12.028 "name": "BaseBdev4", 00:26:12.028 "uuid": "e03972ea-3a81-542d-935a-c9060a5d25a4", 00:26:12.028 "is_configured": true, 00:26:12.028 "data_offset": 2048, 00:26:12.028 "data_size": 63488 00:26:12.028 } 00:26:12.028 ] 00:26:12.028 }' 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.028 07:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.596 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:12.596 07:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:12.596 [2024-11-20 07:23:36.794889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.532 07:23:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.532 07:23:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.532 "name": "raid_bdev1", 00:26:13.532 "uuid": "093b68a0-d7e2-4950-9796-2c6b4e7ab74d", 00:26:13.532 "strip_size_kb": 0, 00:26:13.532 "state": "online", 00:26:13.532 "raid_level": "raid1", 00:26:13.532 "superblock": true, 00:26:13.532 "num_base_bdevs": 4, 00:26:13.532 "num_base_bdevs_discovered": 4, 00:26:13.532 "num_base_bdevs_operational": 4, 00:26:13.532 "base_bdevs_list": [ 00:26:13.532 { 00:26:13.532 "name": "BaseBdev1", 00:26:13.532 "uuid": "fbf64779-2505-58d7-9199-1180df6ea022", 00:26:13.532 "is_configured": true, 00:26:13.532 "data_offset": 2048, 00:26:13.532 "data_size": 63488 00:26:13.532 }, 00:26:13.532 { 00:26:13.532 "name": "BaseBdev2", 00:26:13.532 "uuid": "479f8c85-6ae2-55aa-888a-779c87a5435c", 00:26:13.532 "is_configured": true, 00:26:13.532 "data_offset": 2048, 00:26:13.532 "data_size": 63488 00:26:13.532 }, 00:26:13.532 { 00:26:13.532 "name": "BaseBdev3", 00:26:13.532 "uuid": "26ee49c1-9e54-5339-bcd9-0541718e68f1", 00:26:13.532 "is_configured": true, 00:26:13.532 "data_offset": 2048, 00:26:13.532 "data_size": 63488 00:26:13.532 }, 00:26:13.532 { 00:26:13.532 "name": "BaseBdev4", 00:26:13.532 "uuid": "e03972ea-3a81-542d-935a-c9060a5d25a4", 00:26:13.532 "is_configured": true, 00:26:13.532 "data_offset": 2048, 00:26:13.532 "data_size": 63488 00:26:13.532 } 00:26:13.532 ] 00:26:13.532 }' 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.532 07:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.100 [2024-11-20 07:23:38.221575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:14.100 [2024-11-20 07:23:38.221634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:14.100 [2024-11-20 07:23:38.225083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:14.100 { 00:26:14.100 "results": [ 00:26:14.100 { 00:26:14.100 "job": "raid_bdev1", 00:26:14.100 "core_mask": "0x1", 00:26:14.100 "workload": "randrw", 00:26:14.100 "percentage": 50, 00:26:14.100 "status": "finished", 00:26:14.100 "queue_depth": 1, 00:26:14.100 "io_size": 131072, 00:26:14.100 "runtime": 1.424036, 00:26:14.100 "iops": 7079.877194115879, 00:26:14.100 "mibps": 884.9846492644849, 00:26:14.100 "io_failed": 0, 00:26:14.100 "io_timeout": 0, 00:26:14.100 "avg_latency_us": 136.99981893924365, 00:26:14.100 "min_latency_us": 44.45090909090909, 00:26:14.100 "max_latency_us": 1995.8690909090908 00:26:14.100 } 00:26:14.100 ], 00:26:14.100 "core_count": 1 00:26:14.100 } 00:26:14.100 [2024-11-20 07:23:38.225329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.100 [2024-11-20 07:23:38.225576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:14.100 [2024-11-20 07:23:38.225618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75395 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75395 ']' 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75395 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75395 00:26:14.100 killing process with pid 75395 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75395' 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75395 00:26:14.100 07:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75395 00:26:14.100 [2024-11-20 07:23:38.264050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:14.358 [2024-11-20 07:23:38.567336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Rw5LkrlxBD 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:15.735 00:26:15.735 real 0m4.974s 00:26:15.735 user 0m6.137s 00:26:15.735 sys 0m0.645s 
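The bdevperf results block for raid_read_error_test above reports both `"iops": 7079.877194115879` and `"mibps": 884.9846492644849` for 128 KiB I/Os. The two figures are internally consistent: MiB/s is just IOPS multiplied by the I/O size. A quick sketch checking that, using the numbers taken from the log (the helper name is ours, not part of the test suite):

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures from the raid_read_error_test results JSON in the log above.
iops = 7079.877194115879
io_size = 131072  # bdevperf was run with -o 128k
mibps = iops_to_mibps(iops, io_size)
print(mibps)  # agrees with "mibps": 884.9846492644849 reported by bdevperf
```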
00:26:15.735 ************************************ 00:26:15.735 END TEST raid_read_error_test 00:26:15.735 ************************************ 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.735 07:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.735 07:23:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:15.735 07:23:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:15.735 07:23:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.735 07:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:15.735 ************************************ 00:26:15.735 START TEST raid_write_error_test 00:26:15.735 ************************************ 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sjia9DNjH9 00:26:15.735 07:23:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75542 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75542 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75542 ']' 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.735 07:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.735 [2024-11-20 07:23:39.863180] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
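bdevperf is launched above with `-o 128k` and `-q 1`, which correspond to the `"io_size": 131072` and `"queue_depth": 1` fields seen in the results JSON earlier in the log. A minimal sketch of that suffix arithmetic, assuming a simple k/m/g binary-suffix convention (this parser is illustrative only, not bdevperf's actual option handling):

```python
def parse_size(spec: str) -> int:
    """Expand a k/m/g binary size suffix into bytes (illustrative only)."""
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    spec = spec.strip().lower()
    if spec and spec[-1] in units:
        return int(spec[:-1]) * units[spec[-1]]
    return int(spec)

print(parse_size("128k"))  # 131072, matching io_size in the results block
```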
00:26:15.735 [2024-11-20 07:23:39.863668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75542 ] 00:26:15.994 [2024-11-20 07:23:40.049876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.994 [2024-11-20 07:23:40.183778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.299 [2024-11-20 07:23:40.388499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.299 [2024-11-20 07:23:40.388549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.595 BaseBdev1_malloc 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.595 true 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.595 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 [2024-11-20 07:23:40.884210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:16.854 [2024-11-20 07:23:40.884442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.854 [2024-11-20 07:23:40.884490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:16.854 [2024-11-20 07:23:40.884511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.854 [2024-11-20 07:23:40.887459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.854 [2024-11-20 07:23:40.887670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:16.854 BaseBdev1 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 BaseBdev2_malloc 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:16.854 07:23:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 true 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 [2024-11-20 07:23:40.944910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:16.854 [2024-11-20 07:23:40.945003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.854 [2024-11-20 07:23:40.945037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:16.854 [2024-11-20 07:23:40.945055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.854 [2024-11-20 07:23:40.948154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.854 [2024-11-20 07:23:40.948223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:16.854 BaseBdev2 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:26:16.854 BaseBdev3_malloc 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 true 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 [2024-11-20 07:23:41.012720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:16.854 [2024-11-20 07:23:41.012933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.854 [2024-11-20 07:23:41.012976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:16.854 [2024-11-20 07:23:41.012996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.854 [2024-11-20 07:23:41.016039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.854 [2024-11-20 07:23:41.016266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:16.854 BaseBdev3 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 BaseBdev4_malloc 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 true 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.854 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.854 [2024-11-20 07:23:41.069730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:16.854 [2024-11-20 07:23:41.069936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.854 [2024-11-20 07:23:41.070012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:16.854 [2024-11-20 07:23:41.070227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.855 [2024-11-20 07:23:41.073254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.855 [2024-11-20 07:23:41.073311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:16.855 BaseBdev4 
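Throughout the log, verify_raid_bdev_state pulls one record out of the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares fields such as `state` and `num_base_bdevs_discovered` against expected values (4 for the read test, 3 for the write test after a base bdev is failed). The same select-and-check pattern can be sketched in Python; the sample record below is trimmed from the raid_bdev_info JSON in the log, and the function names are ours:

```python
import json

def select_bdev(bdevs, name):
    """Mimic jq's '.[] | select(.name == NAME)' over the RPC output list."""
    return next((b for b in bdevs if b.get("name") == name), None)

def verify_state(info, expected_state, expected_discovered):
    """Check the two fields the shell helper compares most often."""
    return (info["state"] == expected_state
            and info["num_base_bdevs_discovered"] == expected_discovered)

# Trimmed from the raid_bdev_info JSON in the log above.
raw = ('[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1", '
       '"num_base_bdevs": 4, "num_base_bdevs_discovered": 4}]')
info = select_bdev(json.loads(raw), "raid_bdev1")
print(verify_state(info, "online", 4))
```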
00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.855 [2024-11-20 07:23:41.077804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:16.855 [2024-11-20 07:23:41.080434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:16.855 [2024-11-20 07:23:41.080703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.855 [2024-11-20 07:23:41.080946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.855 [2024-11-20 07:23:41.081397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:16.855 [2024-11-20 07:23:41.081428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:16.855 [2024-11-20 07:23:41.081785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:26:16.855 [2024-11-20 07:23:41.082025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:16.855 [2024-11-20 07:23:41.082043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:16.855 [2024-11-20 07:23:41.082323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.855 "name": "raid_bdev1", 00:26:16.855 "uuid": "50635a80-800e-4969-9103-e9437969a87a", 00:26:16.855 "strip_size_kb": 0, 00:26:16.855 "state": "online", 00:26:16.855 "raid_level": "raid1", 00:26:16.855 "superblock": true, 00:26:16.855 "num_base_bdevs": 4, 00:26:16.855 "num_base_bdevs_discovered": 4, 00:26:16.855 
"num_base_bdevs_operational": 4, 00:26:16.855 "base_bdevs_list": [ 00:26:16.855 { 00:26:16.855 "name": "BaseBdev1", 00:26:16.855 "uuid": "8fdb4cc2-ccf2-54c5-9582-581f7a9a13d9", 00:26:16.855 "is_configured": true, 00:26:16.855 "data_offset": 2048, 00:26:16.855 "data_size": 63488 00:26:16.855 }, 00:26:16.855 { 00:26:16.855 "name": "BaseBdev2", 00:26:16.855 "uuid": "b047fd2f-553a-5f5f-b337-7efd94949717", 00:26:16.855 "is_configured": true, 00:26:16.855 "data_offset": 2048, 00:26:16.855 "data_size": 63488 00:26:16.855 }, 00:26:16.855 { 00:26:16.855 "name": "BaseBdev3", 00:26:16.855 "uuid": "5f6e8583-ea09-5924-aeb5-03181d910b11", 00:26:16.855 "is_configured": true, 00:26:16.855 "data_offset": 2048, 00:26:16.855 "data_size": 63488 00:26:16.855 }, 00:26:16.855 { 00:26:16.855 "name": "BaseBdev4", 00:26:16.855 "uuid": "2cf7fe09-2469-56c9-8831-72570ea3028a", 00:26:16.855 "is_configured": true, 00:26:16.855 "data_offset": 2048, 00:26:16.855 "data_size": 63488 00:26:16.855 } 00:26:16.855 ] 00:26:16.855 }' 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.855 07:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.423 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:17.423 07:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:17.681 [2024-11-20 07:23:41.755880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.616 [2024-11-20 07:23:42.631557] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:18.616 [2024-11-20 07:23:42.631643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:18.616 [2024-11-20 07:23:42.631960] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.616 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.616 "name": "raid_bdev1", 00:26:18.616 "uuid": "50635a80-800e-4969-9103-e9437969a87a", 00:26:18.617 "strip_size_kb": 0, 00:26:18.617 "state": "online", 00:26:18.617 "raid_level": "raid1", 00:26:18.617 "superblock": true, 00:26:18.617 "num_base_bdevs": 4, 00:26:18.617 "num_base_bdevs_discovered": 3, 00:26:18.617 "num_base_bdevs_operational": 3, 00:26:18.617 "base_bdevs_list": [ 00:26:18.617 { 00:26:18.617 "name": null, 00:26:18.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.617 "is_configured": false, 00:26:18.617 "data_offset": 0, 00:26:18.617 "data_size": 63488 00:26:18.617 }, 00:26:18.617 { 00:26:18.617 "name": "BaseBdev2", 00:26:18.617 "uuid": "b047fd2f-553a-5f5f-b337-7efd94949717", 00:26:18.617 "is_configured": true, 00:26:18.617 "data_offset": 2048, 00:26:18.617 "data_size": 63488 00:26:18.617 }, 00:26:18.617 { 00:26:18.617 "name": "BaseBdev3", 00:26:18.617 "uuid": "5f6e8583-ea09-5924-aeb5-03181d910b11", 00:26:18.617 "is_configured": true, 00:26:18.617 "data_offset": 2048, 00:26:18.617 "data_size": 63488 00:26:18.617 }, 00:26:18.617 { 00:26:18.617 "name": "BaseBdev4", 00:26:18.617 "uuid": "2cf7fe09-2469-56c9-8831-72570ea3028a", 00:26:18.617 "is_configured": true, 00:26:18.617 "data_offset": 2048, 00:26:18.617 "data_size": 63488 00:26:18.617 } 00:26:18.617 ] 
00:26:18.617 }' 00:26:18.617 07:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.617 07:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.875 07:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:18.875 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.875 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.875 [2024-11-20 07:23:43.163440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.134 [2024-11-20 07:23:43.163635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.134 [2024-11-20 07:23:43.167267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.134 [2024-11-20 07:23:43.167538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.134 [2024-11-20 07:23:43.168350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:26:19.134 "results": [ 00:26:19.134 { 00:26:19.134 "job": "raid_bdev1", 00:26:19.134 "core_mask": "0x1", 00:26:19.134 "workload": "randrw", 00:26:19.134 "percentage": 50, 00:26:19.134 "status": "finished", 00:26:19.134 "queue_depth": 1, 00:26:19.134 "io_size": 131072, 00:26:19.134 "runtime": 1.404989, 00:26:19.134 "iops": 7622.835481274231, 00:26:19.134 "mibps": 952.8544351592789, 00:26:19.134 "io_failed": 0, 00:26:19.134 "io_timeout": 0, 00:26:19.134 "avg_latency_us": 126.81180884475003, 00:26:19.134 "min_latency_us": 44.45090909090909, 00:26:19.134 "max_latency_us": 1854.370909090909 00:26:19.134 } 00:26:19.135 ], 00:26:19.135 "core_count": 1 00:26:19.135 } 00:26:19.135 ee all in destruct 00:26:19.135 [2024-11-20 07:23:43.168549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75542 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75542 ']' 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75542 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75542 00:26:19.135 killing process with pid 75542 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75542' 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75542 00:26:19.135 07:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75542 00:26:19.135 [2024-11-20 07:23:43.201420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:19.393 [2024-11-20 07:23:43.503555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sjia9DNjH9 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:20.770 00:26:20.770 real 0m4.903s 00:26:20.770 user 0m6.041s 00:26:20.770 sys 0m0.593s 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.770 ************************************ 00:26:20.770 END TEST raid_write_error_test 00:26:20.770 ************************************ 00:26:20.770 07:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.770 07:23:44 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:26:20.770 07:23:44 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:26:20.770 07:23:44 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:20.770 07:23:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:20.770 07:23:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.770 07:23:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.770 ************************************ 00:26:20.770 START TEST raid_rebuild_test 00:26:20.770 ************************************ 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:26:20.770 
07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75686 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75686 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75686 ']' 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.770 07:23:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.770 [2024-11-20 07:23:44.802839] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:20.770 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:20.770 Zero copy mechanism will not be used. 
00:26:20.770 [2024-11-20 07:23:44.803230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75686 ] 00:26:20.770 [2024-11-20 07:23:44.977196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.029 [2024-11-20 07:23:45.157920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.288 [2024-11-20 07:23:45.372903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.288 [2024-11-20 07:23:45.372982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.546 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.806 BaseBdev1_malloc 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.806 [2024-11-20 07:23:45.869825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:21.806 
[2024-11-20 07:23:45.869923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.806 [2024-11-20 07:23:45.869970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:21.806 [2024-11-20 07:23:45.869988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.806 [2024-11-20 07:23:45.872881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.806 [2024-11-20 07:23:45.872934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:21.806 BaseBdev1 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.806 BaseBdev2_malloc 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.806 [2024-11-20 07:23:45.923916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:21.806 [2024-11-20 07:23:45.924234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.806 [2024-11-20 07:23:45.924276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:26:21.806 [2024-11-20 07:23:45.924298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.806 [2024-11-20 07:23:45.927249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.806 [2024-11-20 07:23:45.927313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:21.806 BaseBdev2 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:21.806 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.807 spare_malloc 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.807 spare_delay 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.807 07:23:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.807 [2024-11-20 07:23:46.002277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:21.807 [2024-11-20 07:23:46.002483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:26:21.807 [2024-11-20 07:23:46.002522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:21.807 [2024-11-20 07:23:46.002541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.807 [2024-11-20 07:23:46.005398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.807 [2024-11-20 07:23:46.005450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:21.807 spare 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.807 [2024-11-20 07:23:46.010416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.807 [2024-11-20 07:23:46.012846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:21.807 [2024-11-20 07:23:46.013137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:21.807 [2024-11-20 07:23:46.013168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:21.807 [2024-11-20 07:23:46.013501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:21.807 [2024-11-20 07:23:46.013759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:21.807 [2024-11-20 07:23:46.013779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:21.807 [2024-11-20 07:23:46.014001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.807 "name": "raid_bdev1", 00:26:21.807 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:21.807 "strip_size_kb": 0, 00:26:21.807 "state": "online", 00:26:21.807 
"raid_level": "raid1", 00:26:21.807 "superblock": false, 00:26:21.807 "num_base_bdevs": 2, 00:26:21.807 "num_base_bdevs_discovered": 2, 00:26:21.807 "num_base_bdevs_operational": 2, 00:26:21.807 "base_bdevs_list": [ 00:26:21.807 { 00:26:21.807 "name": "BaseBdev1", 00:26:21.807 "uuid": "69c339c5-bf86-5801-b961-87d0135cd07b", 00:26:21.807 "is_configured": true, 00:26:21.807 "data_offset": 0, 00:26:21.807 "data_size": 65536 00:26:21.807 }, 00:26:21.807 { 00:26:21.807 "name": "BaseBdev2", 00:26:21.807 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:21.807 "is_configured": true, 00:26:21.807 "data_offset": 0, 00:26:21.807 "data_size": 65536 00:26:21.807 } 00:26:21.807 ] 00:26:21.807 }' 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.807 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.375 [2024-11-20 07:23:46.538996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.375 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:22.633 [2024-11-20 07:23:46.910830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:22.892 /dev/nbd0 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.892 1+0 records in 00:26:22.892 1+0 records out 00:26:22.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395969 s, 10.3 MB/s 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:26:22.892 07:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:29.456 65536+0 records in 00:26:29.456 65536+0 records out 00:26:29.456 33554432 bytes (34 MB, 32 MiB) copied, 6.23702 s, 5.4 MB/s 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:29.456 [2024-11-20 07:23:53.502964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.456 [2024-11-20 07:23:53.531044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:29.456 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.457 "name": "raid_bdev1", 00:26:29.457 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:29.457 "strip_size_kb": 0, 00:26:29.457 "state": "online", 00:26:29.457 "raid_level": "raid1", 00:26:29.457 "superblock": false, 00:26:29.457 "num_base_bdevs": 2, 00:26:29.457 "num_base_bdevs_discovered": 1, 00:26:29.457 "num_base_bdevs_operational": 1, 00:26:29.457 "base_bdevs_list": [ 00:26:29.457 { 00:26:29.457 "name": null, 00:26:29.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.457 "is_configured": false, 00:26:29.457 "data_offset": 0, 00:26:29.457 "data_size": 65536 00:26:29.457 }, 00:26:29.457 { 00:26:29.457 "name": "BaseBdev2", 00:26:29.457 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:29.457 "is_configured": true, 00:26:29.457 "data_offset": 0, 00:26:29.457 "data_size": 65536 00:26:29.457 } 00:26:29.457 ] 00:26:29.457 }' 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.457 07:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.024 07:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:30.024 07:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.024 07:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.024 [2024-11-20 07:23:54.055242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:30.024 [2024-11-20 07:23:54.072636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:26:30.024 07:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.024 07:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:30.024 [2024-11-20 07:23:54.075098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.958 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:30.958 "name": "raid_bdev1", 00:26:30.958 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:30.958 "strip_size_kb": 0, 00:26:30.958 "state": "online", 00:26:30.958 "raid_level": "raid1", 00:26:30.958 "superblock": false, 00:26:30.959 "num_base_bdevs": 2, 00:26:30.959 "num_base_bdevs_discovered": 2, 00:26:30.959 "num_base_bdevs_operational": 2, 00:26:30.959 "process": { 00:26:30.959 "type": "rebuild", 00:26:30.959 "target": "spare", 00:26:30.959 "progress": { 00:26:30.959 
"blocks": 20480, 00:26:30.959 "percent": 31 00:26:30.959 } 00:26:30.959 }, 00:26:30.959 "base_bdevs_list": [ 00:26:30.959 { 00:26:30.959 "name": "spare", 00:26:30.959 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:30.959 "is_configured": true, 00:26:30.959 "data_offset": 0, 00:26:30.959 "data_size": 65536 00:26:30.959 }, 00:26:30.959 { 00:26:30.959 "name": "BaseBdev2", 00:26:30.959 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:30.959 "is_configured": true, 00:26:30.959 "data_offset": 0, 00:26:30.959 "data_size": 65536 00:26:30.959 } 00:26:30.959 ] 00:26:30.959 }' 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.959 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.959 [2024-11-20 07:23:55.244697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.217 [2024-11-20 07:23:55.283818] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:31.217 [2024-11-20 07:23:55.284057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.217 [2024-11-20 07:23:55.284091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.217 [2024-11-20 07:23:55.284109] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:31.217 07:23:55 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.217 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.218 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.218 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.218 "name": "raid_bdev1", 00:26:31.218 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:31.218 "strip_size_kb": 0, 00:26:31.218 "state": "online", 00:26:31.218 "raid_level": "raid1", 00:26:31.218 
"superblock": false, 00:26:31.218 "num_base_bdevs": 2, 00:26:31.218 "num_base_bdevs_discovered": 1, 00:26:31.218 "num_base_bdevs_operational": 1, 00:26:31.218 "base_bdevs_list": [ 00:26:31.218 { 00:26:31.218 "name": null, 00:26:31.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.218 "is_configured": false, 00:26:31.218 "data_offset": 0, 00:26:31.218 "data_size": 65536 00:26:31.218 }, 00:26:31.218 { 00:26:31.218 "name": "BaseBdev2", 00:26:31.218 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:31.218 "is_configured": true, 00:26:31.218 "data_offset": 0, 00:26:31.218 "data_size": 65536 00:26:31.218 } 00:26:31.218 ] 00:26:31.218 }' 00:26:31.218 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.218 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.784 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:26:31.784 "name": "raid_bdev1", 00:26:31.784 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:31.784 "strip_size_kb": 0, 00:26:31.784 "state": "online", 00:26:31.784 "raid_level": "raid1", 00:26:31.784 "superblock": false, 00:26:31.784 "num_base_bdevs": 2, 00:26:31.784 "num_base_bdevs_discovered": 1, 00:26:31.784 "num_base_bdevs_operational": 1, 00:26:31.784 "base_bdevs_list": [ 00:26:31.784 { 00:26:31.784 "name": null, 00:26:31.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.785 "is_configured": false, 00:26:31.785 "data_offset": 0, 00:26:31.785 "data_size": 65536 00:26:31.785 }, 00:26:31.785 { 00:26:31.785 "name": "BaseBdev2", 00:26:31.785 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:31.785 "is_configured": true, 00:26:31.785 "data_offset": 0, 00:26:31.785 "data_size": 65536 00:26:31.785 } 00:26:31.785 ] 00:26:31.785 }' 00:26:31.785 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:31.785 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:31.785 07:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:31.785 07:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:31.785 07:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:31.785 07:23:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.785 07:23:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.785 [2024-11-20 07:23:56.034080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:31.785 [2024-11-20 07:23:56.049470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:26:31.785 07:23:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.785 
07:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:31.785 [2024-11-20 07:23:56.052104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:33.160 "name": "raid_bdev1", 00:26:33.160 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:33.160 "strip_size_kb": 0, 00:26:33.160 "state": "online", 00:26:33.160 "raid_level": "raid1", 00:26:33.160 "superblock": false, 00:26:33.160 "num_base_bdevs": 2, 00:26:33.160 "num_base_bdevs_discovered": 2, 00:26:33.160 "num_base_bdevs_operational": 2, 00:26:33.160 "process": { 00:26:33.160 "type": "rebuild", 00:26:33.160 "target": "spare", 00:26:33.160 "progress": { 00:26:33.160 "blocks": 20480, 00:26:33.160 "percent": 31 00:26:33.160 } 00:26:33.160 }, 00:26:33.160 "base_bdevs_list": [ 
00:26:33.160 { 00:26:33.160 "name": "spare", 00:26:33.160 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:33.160 "is_configured": true, 00:26:33.160 "data_offset": 0, 00:26:33.160 "data_size": 65536 00:26:33.160 }, 00:26:33.160 { 00:26:33.160 "name": "BaseBdev2", 00:26:33.160 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:33.160 "is_configured": true, 00:26:33.160 "data_offset": 0, 00:26:33.160 "data_size": 65536 00:26:33.160 } 00:26:33.160 ] 00:26:33.160 }' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=401 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:33.160 
07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:33.160 "name": "raid_bdev1", 00:26:33.160 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:33.160 "strip_size_kb": 0, 00:26:33.160 "state": "online", 00:26:33.160 "raid_level": "raid1", 00:26:33.160 "superblock": false, 00:26:33.160 "num_base_bdevs": 2, 00:26:33.160 "num_base_bdevs_discovered": 2, 00:26:33.160 "num_base_bdevs_operational": 2, 00:26:33.160 "process": { 00:26:33.160 "type": "rebuild", 00:26:33.160 "target": "spare", 00:26:33.160 "progress": { 00:26:33.160 "blocks": 22528, 00:26:33.160 "percent": 34 00:26:33.160 } 00:26:33.160 }, 00:26:33.160 "base_bdevs_list": [ 00:26:33.160 { 00:26:33.160 "name": "spare", 00:26:33.160 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:33.160 "is_configured": true, 00:26:33.160 "data_offset": 0, 00:26:33.160 "data_size": 65536 00:26:33.160 }, 00:26:33.160 { 00:26:33.160 "name": "BaseBdev2", 00:26:33.160 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:33.160 "is_configured": true, 00:26:33.160 "data_offset": 0, 00:26:33.160 "data_size": 65536 00:26:33.160 } 00:26:33.160 ] 00:26:33.160 }' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:33.160 07:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:34.535 "name": "raid_bdev1", 00:26:34.535 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:34.535 "strip_size_kb": 0, 00:26:34.535 "state": "online", 00:26:34.535 "raid_level": "raid1", 00:26:34.535 "superblock": false, 00:26:34.535 "num_base_bdevs": 2, 00:26:34.535 "num_base_bdevs_discovered": 2, 00:26:34.535 "num_base_bdevs_operational": 2, 00:26:34.535 "process": { 
00:26:34.535 "type": "rebuild", 00:26:34.535 "target": "spare", 00:26:34.535 "progress": { 00:26:34.535 "blocks": 47104, 00:26:34.535 "percent": 71 00:26:34.535 } 00:26:34.535 }, 00:26:34.535 "base_bdevs_list": [ 00:26:34.535 { 00:26:34.535 "name": "spare", 00:26:34.535 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:34.535 "is_configured": true, 00:26:34.535 "data_offset": 0, 00:26:34.535 "data_size": 65536 00:26:34.535 }, 00:26:34.535 { 00:26:34.535 "name": "BaseBdev2", 00:26:34.535 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:34.535 "is_configured": true, 00:26:34.535 "data_offset": 0, 00:26:34.535 "data_size": 65536 00:26:34.535 } 00:26:34.535 ] 00:26:34.535 }' 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.535 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:34.536 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.536 07:23:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:35.101 [2024-11-20 07:23:59.276116] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:35.101 [2024-11-20 07:23:59.276219] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:35.101 [2024-11-20 07:23:59.276299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.359 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:35.359 "name": "raid_bdev1", 00:26:35.360 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:35.360 "strip_size_kb": 0, 00:26:35.360 "state": "online", 00:26:35.360 "raid_level": "raid1", 00:26:35.360 "superblock": false, 00:26:35.360 "num_base_bdevs": 2, 00:26:35.360 "num_base_bdevs_discovered": 2, 00:26:35.360 "num_base_bdevs_operational": 2, 00:26:35.360 "base_bdevs_list": [ 00:26:35.360 { 00:26:35.360 "name": "spare", 00:26:35.360 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:35.360 "is_configured": true, 00:26:35.360 "data_offset": 0, 00:26:35.360 "data_size": 65536 00:26:35.360 }, 00:26:35.360 { 00:26:35.360 "name": "BaseBdev2", 00:26:35.360 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:35.360 "is_configured": true, 00:26:35.360 "data_offset": 0, 00:26:35.360 "data_size": 65536 00:26:35.360 } 00:26:35.360 ] 00:26:35.360 }' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:35.618 07:23:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:35.618 "name": "raid_bdev1", 00:26:35.618 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:35.618 "strip_size_kb": 0, 00:26:35.618 "state": "online", 00:26:35.618 "raid_level": "raid1", 00:26:35.618 "superblock": false, 00:26:35.618 "num_base_bdevs": 2, 00:26:35.618 "num_base_bdevs_discovered": 2, 00:26:35.618 "num_base_bdevs_operational": 2, 00:26:35.618 "base_bdevs_list": [ 00:26:35.618 { 00:26:35.618 "name": "spare", 00:26:35.618 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:35.618 "is_configured": true, 
00:26:35.618 "data_offset": 0, 00:26:35.618 "data_size": 65536 00:26:35.618 }, 00:26:35.618 { 00:26:35.618 "name": "BaseBdev2", 00:26:35.618 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:35.618 "is_configured": true, 00:26:35.618 "data_offset": 0, 00:26:35.618 "data_size": 65536 00:26:35.618 } 00:26:35.618 ] 00:26:35.618 }' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:35.618 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.876 07:23:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.876 "name": "raid_bdev1", 00:26:35.876 "uuid": "be1710d1-d8a5-48c1-b7e6-2dbe9f2d464b", 00:26:35.876 "strip_size_kb": 0, 00:26:35.876 "state": "online", 00:26:35.876 "raid_level": "raid1", 00:26:35.876 "superblock": false, 00:26:35.876 "num_base_bdevs": 2, 00:26:35.876 "num_base_bdevs_discovered": 2, 00:26:35.876 "num_base_bdevs_operational": 2, 00:26:35.876 "base_bdevs_list": [ 00:26:35.876 { 00:26:35.876 "name": "spare", 00:26:35.876 "uuid": "6ae84c99-4bdc-5a7b-b090-d13626a69a4d", 00:26:35.876 "is_configured": true, 00:26:35.876 "data_offset": 0, 00:26:35.876 "data_size": 65536 00:26:35.876 }, 00:26:35.876 { 00:26:35.876 "name": "BaseBdev2", 00:26:35.876 "uuid": "a669b4cd-51ad-597a-a698-28f1c58d0404", 00:26:35.876 "is_configured": true, 00:26:35.876 "data_offset": 0, 00:26:35.876 "data_size": 65536 00:26:35.876 } 00:26:35.876 ] 00:26:35.876 }' 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.876 07:23:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.443 [2024-11-20 07:24:00.434046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.443 [2024-11-20 
07:24:00.434262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.443 [2024-11-20 07:24:00.434480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.443 [2024-11-20 07:24:00.434708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.443 [2024-11-20 07:24:00.434894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:36.443 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:36.702 /dev/nbd0 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:36.702 1+0 records in 00:26:36.702 1+0 records out 00:26:36.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585954 s, 7.0 MB/s 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:36.702 07:24:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:36.961 /dev/nbd1 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:36.961 1+0 records in 00:26:36.961 1+0 records out 00:26:36.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333878 s, 12.3 MB/s 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:36.961 07:24:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.219 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:37.478 07:24:01 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.478 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75686 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75686 ']' 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75686 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:26:37.736 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.737 07:24:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75686 00:26:37.737 killing process with pid 75686 00:26:37.737 Received shutdown signal, test time was about 60.000000 seconds 00:26:37.737 00:26:37.737 Latency(us) 00:26:37.737 [2024-11-20T07:24:02.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.737 [2024-11-20T07:24:02.026Z] =================================================================================================================== 00:26:37.737 [2024-11-20T07:24:02.026Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:37.737 07:24:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.737 07:24:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.737 07:24:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75686' 00:26:37.737 07:24:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75686 00:26:37.737 [2024-11-20 07:24:02.006454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:37.737 07:24:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75686 00:26:38.304 [2024-11-20 07:24:02.283701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:26:39.241 00:26:39.241 real 0m18.616s 00:26:39.241 user 0m21.197s 00:26:39.241 sys 
0m3.580s 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.241 ************************************ 00:26:39.241 END TEST raid_rebuild_test 00:26:39.241 ************************************ 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.241 07:24:03 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:39.241 07:24:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:39.241 07:24:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.241 07:24:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:39.241 ************************************ 00:26:39.241 START TEST raid_rebuild_test_sb 00:26:39.241 ************************************ 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76138 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76138 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 76138 ']' 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.241 07:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.241 [2024-11-20 07:24:03.498448] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:26:39.241 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:39.241 Zero copy mechanism will not be used. 00:26:39.241 [2024-11-20 07:24:03.498904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76138 ] 00:26:39.500 [2024-11-20 07:24:03.685488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.759 [2024-11-20 07:24:03.812970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.759 [2024-11-20 07:24:04.018664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.759 [2024-11-20 07:24:04.018720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.327 BaseBdev1_malloc 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.327 [2024-11-20 07:24:04.570530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:40.327 [2024-11-20 07:24:04.570770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.327 [2024-11-20 07:24:04.570847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:40.327 [2024-11-20 07:24:04.571054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.327 [2024-11-20 07:24:04.573927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.327 [2024-11-20 07:24:04.574097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:40.327 BaseBdev1 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.327 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 BaseBdev2_malloc 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 [2024-11-20 07:24:04.628316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:40.586 [2024-11-20 07:24:04.628529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.586 [2024-11-20 07:24:04.628570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:40.586 [2024-11-20 07:24:04.628605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.586 [2024-11-20 07:24:04.631418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.586 [2024-11-20 07:24:04.631467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:40.586 BaseBdev2 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 spare_malloc 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 spare_delay 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 [2024-11-20 07:24:04.698218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:40.586 [2024-11-20 07:24:04.698291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.586 [2024-11-20 07:24:04.698320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:40.586 [2024-11-20 07:24:04.698337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.586 [2024-11-20 07:24:04.701325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.586 [2024-11-20 07:24:04.701375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:40.586 spare 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 
07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 [2024-11-20 07:24:04.706308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.586 [2024-11-20 07:24:04.708835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:40.586 [2024-11-20 07:24:04.709226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:40.586 [2024-11-20 07:24:04.709258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:40.586 [2024-11-20 07:24:04.709606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:40.586 [2024-11-20 07:24:04.709868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:40.586 [2024-11-20 07:24:04.709885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:40.586 [2024-11-20 07:24:04.710113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.586 "name": "raid_bdev1", 00:26:40.586 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:40.586 "strip_size_kb": 0, 00:26:40.586 "state": "online", 00:26:40.586 "raid_level": "raid1", 00:26:40.586 "superblock": true, 00:26:40.586 "num_base_bdevs": 2, 00:26:40.586 "num_base_bdevs_discovered": 2, 00:26:40.586 "num_base_bdevs_operational": 2, 00:26:40.586 "base_bdevs_list": [ 00:26:40.586 { 00:26:40.586 "name": "BaseBdev1", 00:26:40.586 "uuid": "97d730ff-ff76-591b-9bd0-2aeecad88e89", 00:26:40.586 "is_configured": true, 00:26:40.586 "data_offset": 2048, 00:26:40.586 "data_size": 63488 00:26:40.586 }, 00:26:40.586 { 00:26:40.586 "name": "BaseBdev2", 00:26:40.586 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:40.586 "is_configured": true, 00:26:40.586 "data_offset": 2048, 00:26:40.586 "data_size": 63488 00:26:40.586 } 00:26:40.586 ] 00:26:40.586 }' 00:26:40.586 07:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.586 07:24:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.163 [2024-11-20 07:24:05.218838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:41.163 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:41.453 [2024-11-20 07:24:05.618644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:41.453 /dev/nbd0 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:41.453 1+0 records in 00:26:41.453 1+0 records out 00:26:41.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447209 s, 9.2 MB/s 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:26:41.453 07:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:48.019 63488+0 records in 00:26:48.019 63488+0 records out 00:26:48.019 32505856 bytes (33 MB, 31 MiB) copied, 6.06088 s, 5.4 MB/s 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.019 07:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:48.019 [2024-11-20 07:24:12.084159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.019 [2024-11-20 07:24:12.120282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:48.019 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:48.020 "name": "raid_bdev1", 00:26:48.020 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:48.020 "strip_size_kb": 0, 00:26:48.020 "state": "online", 00:26:48.020 "raid_level": "raid1", 
00:26:48.020 "superblock": true, 00:26:48.020 "num_base_bdevs": 2, 00:26:48.020 "num_base_bdevs_discovered": 1, 00:26:48.020 "num_base_bdevs_operational": 1, 00:26:48.020 "base_bdevs_list": [ 00:26:48.020 { 00:26:48.020 "name": null, 00:26:48.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.020 "is_configured": false, 00:26:48.020 "data_offset": 0, 00:26:48.020 "data_size": 63488 00:26:48.020 }, 00:26:48.020 { 00:26:48.020 "name": "BaseBdev2", 00:26:48.020 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:48.020 "is_configured": true, 00:26:48.020 "data_offset": 2048, 00:26:48.020 "data_size": 63488 00:26:48.020 } 00:26:48.020 ] 00:26:48.020 }' 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:48.020 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.587 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:48.587 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.587 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.587 [2024-11-20 07:24:12.624474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.587 [2024-11-20 07:24:12.640712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:26:48.588 07:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.588 07:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:48.588 [2024-11-20 07:24:12.643170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:49.523 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:49.524 "name": "raid_bdev1", 00:26:49.524 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:49.524 "strip_size_kb": 0, 00:26:49.524 "state": "online", 00:26:49.524 "raid_level": "raid1", 00:26:49.524 "superblock": true, 00:26:49.524 "num_base_bdevs": 2, 00:26:49.524 "num_base_bdevs_discovered": 2, 00:26:49.524 "num_base_bdevs_operational": 2, 00:26:49.524 "process": { 00:26:49.524 "type": "rebuild", 00:26:49.524 "target": "spare", 00:26:49.524 "progress": { 00:26:49.524 "blocks": 20480, 00:26:49.524 "percent": 32 00:26:49.524 } 00:26:49.524 }, 00:26:49.524 "base_bdevs_list": [ 00:26:49.524 { 00:26:49.524 "name": "spare", 00:26:49.524 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:49.524 "is_configured": true, 00:26:49.524 "data_offset": 2048, 00:26:49.524 "data_size": 63488 00:26:49.524 }, 00:26:49.524 { 00:26:49.524 "name": "BaseBdev2", 00:26:49.524 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:49.524 "is_configured": true, 00:26:49.524 "data_offset": 2048, 
00:26:49.524 "data_size": 63488 00:26:49.524 } 00:26:49.524 ] 00:26:49.524 }' 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.524 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.783 [2024-11-20 07:24:13.820333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.783 [2024-11-20 07:24:13.852183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:49.783 [2024-11-20 07:24:13.852446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.783 [2024-11-20 07:24:13.852474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.783 [2024-11-20 07:24:13.852495] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:49.783 07:24:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.783 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.783 "name": "raid_bdev1", 00:26:49.783 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:49.783 "strip_size_kb": 0, 00:26:49.784 "state": "online", 00:26:49.784 "raid_level": "raid1", 00:26:49.784 "superblock": true, 00:26:49.784 "num_base_bdevs": 2, 00:26:49.784 "num_base_bdevs_discovered": 1, 00:26:49.784 "num_base_bdevs_operational": 1, 00:26:49.784 "base_bdevs_list": [ 00:26:49.784 { 00:26:49.784 "name": null, 00:26:49.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.784 "is_configured": false, 00:26:49.784 "data_offset": 0, 00:26:49.784 "data_size": 63488 00:26:49.784 }, 00:26:49.784 { 
00:26:49.784 "name": "BaseBdev2", 00:26:49.784 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:49.784 "is_configured": true, 00:26:49.784 "data_offset": 2048, 00:26:49.784 "data_size": 63488 00:26:49.784 } 00:26:49.784 ] 00:26:49.784 }' 00:26:49.784 07:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.784 07:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:50.351 "name": "raid_bdev1", 00:26:50.351 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:50.351 "strip_size_kb": 0, 00:26:50.351 "state": "online", 00:26:50.351 "raid_level": "raid1", 00:26:50.351 "superblock": true, 00:26:50.351 "num_base_bdevs": 2, 00:26:50.351 "num_base_bdevs_discovered": 1, 00:26:50.351 "num_base_bdevs_operational": 1, 
00:26:50.351 "base_bdevs_list": [ 00:26:50.351 { 00:26:50.351 "name": null, 00:26:50.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.351 "is_configured": false, 00:26:50.351 "data_offset": 0, 00:26:50.351 "data_size": 63488 00:26:50.351 }, 00:26:50.351 { 00:26:50.351 "name": "BaseBdev2", 00:26:50.351 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:50.351 "is_configured": true, 00:26:50.351 "data_offset": 2048, 00:26:50.351 "data_size": 63488 00:26:50.351 } 00:26:50.351 ] 00:26:50.351 }' 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.351 [2024-11-20 07:24:14.596572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:50.351 [2024-11-20 07:24:14.612518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.351 07:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:50.351 [2024-11-20 07:24:14.615216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.729 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:51.729 "name": "raid_bdev1", 00:26:51.729 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:51.729 "strip_size_kb": 0, 00:26:51.729 "state": "online", 00:26:51.729 "raid_level": "raid1", 00:26:51.729 "superblock": true, 00:26:51.729 "num_base_bdevs": 2, 00:26:51.729 "num_base_bdevs_discovered": 2, 00:26:51.729 "num_base_bdevs_operational": 2, 00:26:51.729 "process": { 00:26:51.729 "type": "rebuild", 00:26:51.729 "target": "spare", 00:26:51.729 "progress": { 00:26:51.729 "blocks": 20480, 00:26:51.729 "percent": 32 00:26:51.729 } 00:26:51.730 }, 00:26:51.730 "base_bdevs_list": [ 00:26:51.730 { 00:26:51.730 "name": "spare", 00:26:51.730 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:51.730 "is_configured": true, 00:26:51.730 "data_offset": 2048, 00:26:51.730 "data_size": 63488 00:26:51.730 }, 00:26:51.730 { 00:26:51.730 "name": "BaseBdev2", 00:26:51.730 "uuid": 
"83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:51.730 "is_configured": true, 00:26:51.730 "data_offset": 2048, 00:26:51.730 "data_size": 63488 00:26:51.730 } 00:26:51.730 ] 00:26:51.730 }' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:51.730 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:51.730 "name": "raid_bdev1", 00:26:51.730 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:51.730 "strip_size_kb": 0, 00:26:51.730 "state": "online", 00:26:51.730 "raid_level": "raid1", 00:26:51.730 "superblock": true, 00:26:51.730 "num_base_bdevs": 2, 00:26:51.730 "num_base_bdevs_discovered": 2, 00:26:51.730 "num_base_bdevs_operational": 2, 00:26:51.730 "process": { 00:26:51.730 "type": "rebuild", 00:26:51.730 "target": "spare", 00:26:51.730 "progress": { 00:26:51.730 "blocks": 22528, 00:26:51.730 "percent": 35 00:26:51.730 } 00:26:51.730 }, 00:26:51.730 "base_bdevs_list": [ 00:26:51.730 { 00:26:51.730 "name": "spare", 00:26:51.730 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:51.730 "is_configured": true, 00:26:51.730 "data_offset": 2048, 00:26:51.730 "data_size": 63488 00:26:51.730 }, 00:26:51.730 { 00:26:51.730 "name": "BaseBdev2", 00:26:51.730 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:51.730 "is_configured": true, 00:26:51.730 "data_offset": 2048, 00:26:51.730 "data_size": 63488 00:26:51.730 } 00:26:51.730 ] 00:26:51.730 }' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.730 07:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.667 07:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.927 07:24:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:52.927 "name": "raid_bdev1", 00:26:52.927 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:52.927 "strip_size_kb": 0, 00:26:52.927 "state": "online", 00:26:52.927 "raid_level": "raid1", 00:26:52.927 "superblock": true, 00:26:52.927 "num_base_bdevs": 2, 00:26:52.927 "num_base_bdevs_discovered": 2, 00:26:52.927 
"num_base_bdevs_operational": 2, 00:26:52.927 "process": { 00:26:52.927 "type": "rebuild", 00:26:52.927 "target": "spare", 00:26:52.927 "progress": { 00:26:52.927 "blocks": 47104, 00:26:52.927 "percent": 74 00:26:52.927 } 00:26:52.927 }, 00:26:52.927 "base_bdevs_list": [ 00:26:52.927 { 00:26:52.927 "name": "spare", 00:26:52.927 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:52.927 "is_configured": true, 00:26:52.927 "data_offset": 2048, 00:26:52.927 "data_size": 63488 00:26:52.927 }, 00:26:52.927 { 00:26:52.927 "name": "BaseBdev2", 00:26:52.927 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:52.927 "is_configured": true, 00:26:52.927 "data_offset": 2048, 00:26:52.927 "data_size": 63488 00:26:52.927 } 00:26:52.927 ] 00:26:52.927 }' 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.927 07:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:53.501 [2024-11-20 07:24:17.737233] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:53.501 [2024-11-20 07:24:17.737335] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:53.501 [2024-11-20 07:24:17.737511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.071 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:54.071 "name": "raid_bdev1", 00:26:54.071 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:54.071 "strip_size_kb": 0, 00:26:54.071 "state": "online", 00:26:54.072 "raid_level": "raid1", 00:26:54.072 "superblock": true, 00:26:54.072 "num_base_bdevs": 2, 00:26:54.072 "num_base_bdevs_discovered": 2, 00:26:54.072 "num_base_bdevs_operational": 2, 00:26:54.072 "base_bdevs_list": [ 00:26:54.072 { 00:26:54.072 "name": "spare", 00:26:54.072 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:54.072 "is_configured": true, 00:26:54.072 "data_offset": 2048, 00:26:54.072 "data_size": 63488 00:26:54.072 }, 00:26:54.072 { 00:26:54.072 "name": "BaseBdev2", 00:26:54.072 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:54.072 "is_configured": true, 00:26:54.072 "data_offset": 2048, 00:26:54.072 "data_size": 63488 00:26:54.072 } 00:26:54.072 ] 00:26:54.072 }' 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:54.072 "name": "raid_bdev1", 00:26:54.072 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:54.072 "strip_size_kb": 0, 00:26:54.072 "state": "online", 00:26:54.072 "raid_level": "raid1", 00:26:54.072 "superblock": true, 00:26:54.072 "num_base_bdevs": 2, 00:26:54.072 "num_base_bdevs_discovered": 2, 00:26:54.072 "num_base_bdevs_operational": 2, 
00:26:54.072 "base_bdevs_list": [ 00:26:54.072 { 00:26:54.072 "name": "spare", 00:26:54.072 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:54.072 "is_configured": true, 00:26:54.072 "data_offset": 2048, 00:26:54.072 "data_size": 63488 00:26:54.072 }, 00:26:54.072 { 00:26:54.072 "name": "BaseBdev2", 00:26:54.072 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:54.072 "is_configured": true, 00:26:54.072 "data_offset": 2048, 00:26:54.072 "data_size": 63488 00:26:54.072 } 00:26:54.072 ] 00:26:54.072 }' 00:26:54.072 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.329 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.329 07:24:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.330 "name": "raid_bdev1", 00:26:54.330 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:54.330 "strip_size_kb": 0, 00:26:54.330 "state": "online", 00:26:54.330 "raid_level": "raid1", 00:26:54.330 "superblock": true, 00:26:54.330 "num_base_bdevs": 2, 00:26:54.330 "num_base_bdevs_discovered": 2, 00:26:54.330 "num_base_bdevs_operational": 2, 00:26:54.330 "base_bdevs_list": [ 00:26:54.330 { 00:26:54.330 "name": "spare", 00:26:54.330 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:54.330 "is_configured": true, 00:26:54.330 "data_offset": 2048, 00:26:54.330 "data_size": 63488 00:26:54.330 }, 00:26:54.330 { 00:26:54.330 "name": "BaseBdev2", 00:26:54.330 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:54.330 "is_configured": true, 00:26:54.330 "data_offset": 2048, 00:26:54.330 "data_size": 63488 00:26:54.330 } 00:26:54.330 ] 00:26:54.330 }' 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.330 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.895 [2024-11-20 07:24:18.980440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.895 [2024-11-20 07:24:18.980659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.895 [2024-11-20 07:24:18.980873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.895 [2024-11-20 07:24:18.980995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.895 [2024-11-20 07:24:18.981013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.895 07:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:54.895 
07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.895 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:55.152 /dev/nbd0 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:55.152 07:24:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.152 1+0 records in 00:26:55.152 1+0 records out 00:26:55.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552561 s, 7.4 MB/s 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:55.152 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:55.411 /dev/nbd1 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.411 1+0 records in 00:26:55.411 1+0 records out 00:26:55.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463049 s, 8.8 MB/s 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:55.411 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.669 07:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.927 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.185 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.185 [2024-11-20 07:24:20.466944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:56.185 [2024-11-20 07:24:20.467013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:56.185 [2024-11-20 07:24:20.467049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:56.186 [2024-11-20 07:24:20.467064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:56.186 [2024-11-20 07:24:20.470085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:56.186 spare 00:26:56.186 [2024-11-20 07:24:20.470263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:56.186 [2024-11-20 07:24:20.470394] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev spare 00:26:56.186 [2024-11-20 07:24:20.470460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:56.186 [2024-11-20 07:24:20.470671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:56.186 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.186 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:56.186 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.186 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.444 [2024-11-20 07:24:20.570810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:56.444 [2024-11-20 07:24:20.570896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:56.444 [2024-11-20 07:24:20.571392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:26:56.444 [2024-11-20 07:24:20.571788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:56.444 [2024-11-20 07:24:20.571813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:56.444 [2024-11-20 07:24:20.572066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.444 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:56.445 07:24:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.445 "name": "raid_bdev1", 00:26:56.445 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:56.445 "strip_size_kb": 0, 00:26:56.445 "state": "online", 00:26:56.445 "raid_level": "raid1", 00:26:56.445 "superblock": true, 00:26:56.445 "num_base_bdevs": 2, 00:26:56.445 "num_base_bdevs_discovered": 2, 00:26:56.445 "num_base_bdevs_operational": 2, 00:26:56.445 "base_bdevs_list": [ 00:26:56.445 { 00:26:56.445 "name": "spare", 00:26:56.445 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:56.445 "is_configured": true, 00:26:56.445 "data_offset": 2048, 00:26:56.445 "data_size": 63488 00:26:56.445 }, 00:26:56.445 { 
00:26:56.445 "name": "BaseBdev2", 00:26:56.445 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:56.445 "is_configured": true, 00:26:56.445 "data_offset": 2048, 00:26:56.445 "data_size": 63488 00:26:56.445 } 00:26:56.445 ] 00:26:56.445 }' 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.445 07:24:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.012 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:57.012 "name": "raid_bdev1", 00:26:57.012 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:57.012 "strip_size_kb": 0, 00:26:57.012 "state": "online", 00:26:57.012 "raid_level": "raid1", 00:26:57.012 "superblock": true, 00:26:57.012 "num_base_bdevs": 2, 00:26:57.012 "num_base_bdevs_discovered": 2, 00:26:57.012 "num_base_bdevs_operational": 2, 
00:26:57.012 "base_bdevs_list": [ 00:26:57.012 { 00:26:57.012 "name": "spare", 00:26:57.012 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:57.012 "is_configured": true, 00:26:57.012 "data_offset": 2048, 00:26:57.012 "data_size": 63488 00:26:57.012 }, 00:26:57.012 { 00:26:57.012 "name": "BaseBdev2", 00:26:57.012 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:57.012 "is_configured": true, 00:26:57.012 "data_offset": 2048, 00:26:57.012 "data_size": 63488 00:26:57.012 } 00:26:57.012 ] 00:26:57.012 }' 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.013 [2024-11-20 07:24:21.296240] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:57.013 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.273 "name": "raid_bdev1", 00:26:57.273 "uuid": 
"4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:57.273 "strip_size_kb": 0, 00:26:57.273 "state": "online", 00:26:57.273 "raid_level": "raid1", 00:26:57.273 "superblock": true, 00:26:57.273 "num_base_bdevs": 2, 00:26:57.273 "num_base_bdevs_discovered": 1, 00:26:57.273 "num_base_bdevs_operational": 1, 00:26:57.273 "base_bdevs_list": [ 00:26:57.273 { 00:26:57.273 "name": null, 00:26:57.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.273 "is_configured": false, 00:26:57.273 "data_offset": 0, 00:26:57.273 "data_size": 63488 00:26:57.273 }, 00:26:57.273 { 00:26:57.273 "name": "BaseBdev2", 00:26:57.273 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:57.273 "is_configured": true, 00:26:57.273 "data_offset": 2048, 00:26:57.273 "data_size": 63488 00:26:57.273 } 00:26:57.273 ] 00:26:57.273 }' 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.273 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.531 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:57.531 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.531 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.790 [2024-11-20 07:24:21.820432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:57.790 [2024-11-20 07:24:21.820698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:57.790 [2024-11-20 07:24:21.820727] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:57.790 [2024-11-20 07:24:21.820778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:57.790 [2024-11-20 07:24:21.836005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:26:57.790 07:24:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.790 07:24:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:57.790 [2024-11-20 07:24:21.838479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:58.727 "name": "raid_bdev1", 00:26:58.727 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:58.727 "strip_size_kb": 0, 00:26:58.727 "state": "online", 00:26:58.727 "raid_level": "raid1", 
00:26:58.727 "superblock": true, 00:26:58.727 "num_base_bdevs": 2, 00:26:58.727 "num_base_bdevs_discovered": 2, 00:26:58.727 "num_base_bdevs_operational": 2, 00:26:58.727 "process": { 00:26:58.727 "type": "rebuild", 00:26:58.727 "target": "spare", 00:26:58.727 "progress": { 00:26:58.727 "blocks": 20480, 00:26:58.727 "percent": 32 00:26:58.727 } 00:26:58.727 }, 00:26:58.727 "base_bdevs_list": [ 00:26:58.727 { 00:26:58.727 "name": "spare", 00:26:58.727 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:26:58.727 "is_configured": true, 00:26:58.727 "data_offset": 2048, 00:26:58.727 "data_size": 63488 00:26:58.727 }, 00:26:58.727 { 00:26:58.727 "name": "BaseBdev2", 00:26:58.727 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:58.727 "is_configured": true, 00:26:58.727 "data_offset": 2048, 00:26:58.727 "data_size": 63488 00:26:58.727 } 00:26:58.727 ] 00:26:58.727 }' 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.727 07:24:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.727 [2024-11-20 07:24:23.000123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.987 [2024-11-20 07:24:23.047321] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:58.987 [2024-11-20 07:24:23.047696] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:26:58.987 [2024-11-20 07:24:23.047726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.987 [2024-11-20 07:24:23.047742] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.987 "name": "raid_bdev1", 00:26:58.987 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:26:58.987 "strip_size_kb": 0, 00:26:58.987 "state": "online", 00:26:58.987 "raid_level": "raid1", 00:26:58.987 "superblock": true, 00:26:58.987 "num_base_bdevs": 2, 00:26:58.987 "num_base_bdevs_discovered": 1, 00:26:58.987 "num_base_bdevs_operational": 1, 00:26:58.987 "base_bdevs_list": [ 00:26:58.987 { 00:26:58.987 "name": null, 00:26:58.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.987 "is_configured": false, 00:26:58.987 "data_offset": 0, 00:26:58.987 "data_size": 63488 00:26:58.987 }, 00:26:58.987 { 00:26:58.987 "name": "BaseBdev2", 00:26:58.987 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:26:58.987 "is_configured": true, 00:26:58.987 "data_offset": 2048, 00:26:58.987 "data_size": 63488 00:26:58.987 } 00:26:58.987 ] 00:26:58.987 }' 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.987 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.553 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:59.553 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.553 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.553 [2024-11-20 07:24:23.604018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:59.553 [2024-11-20 07:24:23.604276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.553 [2024-11-20 07:24:23.604318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:59.553 [2024-11-20 07:24:23.604354] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.553 [2024-11-20 07:24:23.605029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.553 [2024-11-20 07:24:23.605070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:59.553 [2024-11-20 07:24:23.605187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:59.553 [2024-11-20 07:24:23.605228] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:59.553 [2024-11-20 07:24:23.605241] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:59.553 [2024-11-20 07:24:23.605277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:59.553 [2024-11-20 07:24:23.621396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:26:59.553 spare 00:26:59.553 07:24:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.553 07:24:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:59.553 [2024-11-20 07:24:23.624002] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.540 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:00.540 "name": "raid_bdev1", 00:27:00.541 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:00.541 "strip_size_kb": 0, 00:27:00.541 "state": "online", 00:27:00.541 "raid_level": "raid1", 00:27:00.541 "superblock": true, 00:27:00.541 "num_base_bdevs": 2, 00:27:00.541 "num_base_bdevs_discovered": 2, 00:27:00.541 "num_base_bdevs_operational": 2, 00:27:00.541 "process": { 00:27:00.541 "type": "rebuild", 00:27:00.541 "target": "spare", 00:27:00.541 "progress": { 00:27:00.541 "blocks": 20480, 00:27:00.541 "percent": 32 00:27:00.541 } 00:27:00.541 }, 00:27:00.541 "base_bdevs_list": [ 00:27:00.541 { 00:27:00.541 "name": "spare", 00:27:00.541 "uuid": "1f73d954-f36f-5a96-a07e-a6f250379e30", 00:27:00.541 "is_configured": true, 00:27:00.541 "data_offset": 2048, 00:27:00.541 "data_size": 63488 00:27:00.541 }, 00:27:00.541 { 00:27:00.541 "name": "BaseBdev2", 00:27:00.541 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:00.541 "is_configured": true, 00:27:00.541 "data_offset": 2048, 00:27:00.541 "data_size": 63488 00:27:00.541 } 00:27:00.541 ] 00:27:00.541 }' 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:00.541 
07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.541 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.541 [2024-11-20 07:24:24.797481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:00.800 [2024-11-20 07:24:24.833379] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:00.800 [2024-11-20 07:24:24.833637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.800 [2024-11-20 07:24:24.833671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:00.800 [2024-11-20 07:24:24.833684] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.800 "name": "raid_bdev1", 00:27:00.800 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:00.800 "strip_size_kb": 0, 00:27:00.800 "state": "online", 00:27:00.800 "raid_level": "raid1", 00:27:00.800 "superblock": true, 00:27:00.800 "num_base_bdevs": 2, 00:27:00.800 "num_base_bdevs_discovered": 1, 00:27:00.800 "num_base_bdevs_operational": 1, 00:27:00.800 "base_bdevs_list": [ 00:27:00.800 { 00:27:00.800 "name": null, 00:27:00.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.800 "is_configured": false, 00:27:00.800 "data_offset": 0, 00:27:00.800 "data_size": 63488 00:27:00.800 }, 00:27:00.800 { 00:27:00.800 "name": "BaseBdev2", 00:27:00.800 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:00.800 "is_configured": true, 00:27:00.800 "data_offset": 2048, 00:27:00.800 "data_size": 63488 00:27:00.800 } 00:27:00.800 ] 00:27:00.800 }' 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.800 07:24:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.366 07:24:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:01.366 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:01.366 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:01.367 "name": "raid_bdev1", 00:27:01.367 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:01.367 "strip_size_kb": 0, 00:27:01.367 "state": "online", 00:27:01.367 "raid_level": "raid1", 00:27:01.367 "superblock": true, 00:27:01.367 "num_base_bdevs": 2, 00:27:01.367 "num_base_bdevs_discovered": 1, 00:27:01.367 "num_base_bdevs_operational": 1, 00:27:01.367 "base_bdevs_list": [ 00:27:01.367 { 00:27:01.367 "name": null, 00:27:01.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.367 "is_configured": false, 00:27:01.367 "data_offset": 0, 00:27:01.367 "data_size": 63488 00:27:01.367 }, 00:27:01.367 { 00:27:01.367 "name": "BaseBdev2", 00:27:01.367 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:01.367 "is_configured": true, 00:27:01.367 "data_offset": 2048, 00:27:01.367 "data_size": 
63488 00:27:01.367 } 00:27:01.367 ] 00:27:01.367 }' 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.367 [2024-11-20 07:24:25.580690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:01.367 [2024-11-20 07:24:25.580990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.367 [2024-11-20 07:24:25.581035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:01.367 [2024-11-20 07:24:25.581061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.367 [2024-11-20 07:24:25.581751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.367 [2024-11-20 07:24:25.581777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:27:01.367 [2024-11-20 07:24:25.581881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:01.367 [2024-11-20 07:24:25.581902] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:01.367 [2024-11-20 07:24:25.581916] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:01.367 [2024-11-20 07:24:25.581929] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:01.367 BaseBdev1 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.367 07:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:02.303 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.561 "name": "raid_bdev1", 00:27:02.561 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:02.561 "strip_size_kb": 0, 00:27:02.561 "state": "online", 00:27:02.561 "raid_level": "raid1", 00:27:02.561 "superblock": true, 00:27:02.561 "num_base_bdevs": 2, 00:27:02.561 "num_base_bdevs_discovered": 1, 00:27:02.561 "num_base_bdevs_operational": 1, 00:27:02.561 "base_bdevs_list": [ 00:27:02.561 { 00:27:02.561 "name": null, 00:27:02.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.561 "is_configured": false, 00:27:02.561 "data_offset": 0, 00:27:02.561 "data_size": 63488 00:27:02.561 }, 00:27:02.561 { 00:27:02.561 "name": "BaseBdev2", 00:27:02.561 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:02.561 "is_configured": true, 00:27:02.561 "data_offset": 2048, 00:27:02.561 "data_size": 63488 00:27:02.561 } 00:27:02.561 ] 00:27:02.561 }' 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.561 07:24:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:03.127 "name": "raid_bdev1", 00:27:03.127 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:03.127 "strip_size_kb": 0, 00:27:03.127 "state": "online", 00:27:03.127 "raid_level": "raid1", 00:27:03.127 "superblock": true, 00:27:03.127 "num_base_bdevs": 2, 00:27:03.127 "num_base_bdevs_discovered": 1, 00:27:03.127 "num_base_bdevs_operational": 1, 00:27:03.127 "base_bdevs_list": [ 00:27:03.127 { 00:27:03.127 "name": null, 00:27:03.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.127 "is_configured": false, 00:27:03.127 "data_offset": 0, 00:27:03.127 "data_size": 63488 00:27:03.127 }, 00:27:03.127 { 00:27:03.127 "name": "BaseBdev2", 00:27:03.127 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:03.127 "is_configured": true, 00:27:03.127 "data_offset": 2048, 00:27:03.127 "data_size": 63488 00:27:03.127 } 00:27:03.127 ] 00:27:03.127 }' 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:03.127 07:24:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.127 [2024-11-20 07:24:27.249286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:03.127 [2024-11-20 07:24:27.249479] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:03.127 [2024-11-20 07:24:27.249504] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:03.127 request: 00:27:03.127 { 00:27:03.127 "base_bdev": "BaseBdev1", 00:27:03.127 "raid_bdev": "raid_bdev1", 00:27:03.127 "method": 
"bdev_raid_add_base_bdev", 00:27:03.127 "req_id": 1 00:27:03.127 } 00:27:03.127 Got JSON-RPC error response 00:27:03.127 response: 00:27:03.127 { 00:27:03.127 "code": -22, 00:27:03.127 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:03.127 } 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:03.127 07:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.060 07:24:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.060 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.060 "name": "raid_bdev1", 00:27:04.060 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:04.060 "strip_size_kb": 0, 00:27:04.061 "state": "online", 00:27:04.061 "raid_level": "raid1", 00:27:04.061 "superblock": true, 00:27:04.061 "num_base_bdevs": 2, 00:27:04.061 "num_base_bdevs_discovered": 1, 00:27:04.061 "num_base_bdevs_operational": 1, 00:27:04.061 "base_bdevs_list": [ 00:27:04.061 { 00:27:04.061 "name": null, 00:27:04.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.061 "is_configured": false, 00:27:04.061 "data_offset": 0, 00:27:04.061 "data_size": 63488 00:27:04.061 }, 00:27:04.061 { 00:27:04.061 "name": "BaseBdev2", 00:27:04.061 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:04.061 "is_configured": true, 00:27:04.061 "data_offset": 2048, 00:27:04.061 "data_size": 63488 00:27:04.061 } 00:27:04.061 ] 00:27:04.061 }' 00:27:04.061 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.061 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:04.626 "name": "raid_bdev1", 00:27:04.626 "uuid": "4f39eb48-477c-441c-8c3a-6b61ebc1e8a4", 00:27:04.626 "strip_size_kb": 0, 00:27:04.626 "state": "online", 00:27:04.626 "raid_level": "raid1", 00:27:04.626 "superblock": true, 00:27:04.626 "num_base_bdevs": 2, 00:27:04.626 "num_base_bdevs_discovered": 1, 00:27:04.626 "num_base_bdevs_operational": 1, 00:27:04.626 "base_bdevs_list": [ 00:27:04.626 { 00:27:04.626 "name": null, 00:27:04.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.626 "is_configured": false, 00:27:04.626 "data_offset": 0, 00:27:04.626 "data_size": 63488 00:27:04.626 }, 00:27:04.626 { 00:27:04.626 "name": "BaseBdev2", 00:27:04.626 "uuid": "83e68faa-7b9b-5179-9919-cc8ff3d39f62", 00:27:04.626 "is_configured": true, 00:27:04.626 "data_offset": 2048, 00:27:04.626 "data_size": 63488 00:27:04.626 } 00:27:04.626 ] 00:27:04.626 }' 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:27:04.626 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76138 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76138 ']' 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76138 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76138 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.994 killing process with pid 76138 00:27:04.994 Received shutdown signal, test time was about 60.000000 seconds 00:27:04.994 00:27:04.994 Latency(us) 00:27:04.994 [2024-11-20T07:24:29.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.994 [2024-11-20T07:24:29.283Z] =================================================================================================================== 00:27:04.994 [2024-11-20T07:24:29.283Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76138' 00:27:04.994 07:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76138 00:27:04.995 [2024-11-20 07:24:28.992665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:04.995 07:24:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76138 00:27:04.995 [2024-11-20 07:24:28.992841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.995 [2024-11-20 07:24:28.992911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.995 [2024-11-20 07:24:28.992930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:04.995 [2024-11-20 07:24:29.262702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:06.376 00:27:06.376 real 0m26.940s 00:27:06.376 user 0m33.291s 00:27:06.376 sys 0m3.918s 00:27:06.376 ************************************ 00:27:06.376 END TEST raid_rebuild_test_sb 00:27:06.376 ************************************ 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.376 07:24:30 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:06.376 07:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:06.376 07:24:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.376 07:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:06.376 ************************************ 00:27:06.376 START TEST raid_rebuild_test_io 00:27:06.376 ************************************ 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:06.376 
07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76902 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76902 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76902 ']' 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.376 07:24:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:06.376 [2024-11-20 07:24:30.492357] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:27:06.376 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:06.376 Zero copy mechanism will not be used. 
00:27:06.376 [2024-11-20 07:24:30.492561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76902 ] 00:27:06.636 [2024-11-20 07:24:30.692996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.636 [2024-11-20 07:24:30.831997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.894 [2024-11-20 07:24:31.038351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.894 [2024-11-20 07:24:31.038427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 BaseBdev1_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 [2024-11-20 07:24:31.567148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:27:07.462 [2024-11-20 07:24:31.567264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.462 [2024-11-20 07:24:31.567298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:07.462 [2024-11-20 07:24:31.567317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.462 [2024-11-20 07:24:31.570150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.462 [2024-11-20 07:24:31.570196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:07.462 BaseBdev1 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 BaseBdev2_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 [2024-11-20 07:24:31.619846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:07.462 [2024-11-20 07:24:31.619916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.462 [2024-11-20 07:24:31.619946] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:07.462 [2024-11-20 07:24:31.619982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.462 [2024-11-20 07:24:31.622781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.462 [2024-11-20 07:24:31.622826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:07.462 BaseBdev2 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 spare_malloc 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 spare_delay 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 [2024-11-20 07:24:31.687669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:27:07.462 [2024-11-20 07:24:31.687747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.462 [2024-11-20 07:24:31.687794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:07.462 [2024-11-20 07:24:31.687812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.462 [2024-11-20 07:24:31.690968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.462 [2024-11-20 07:24:31.691023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:07.462 spare 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 [2024-11-20 07:24:31.695768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:07.462 [2024-11-20 07:24:31.698287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:07.462 [2024-11-20 07:24:31.698424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:07.462 [2024-11-20 07:24:31.698446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:07.462 [2024-11-20 07:24:31.698838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:07.462 [2024-11-20 07:24:31.699092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:07.462 [2024-11-20 07:24:31.699114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:27:07.462 [2024-11-20 07:24:31.699353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.462 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.721 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.721 
"name": "raid_bdev1", 00:27:07.721 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:07.721 "strip_size_kb": 0, 00:27:07.721 "state": "online", 00:27:07.721 "raid_level": "raid1", 00:27:07.721 "superblock": false, 00:27:07.721 "num_base_bdevs": 2, 00:27:07.721 "num_base_bdevs_discovered": 2, 00:27:07.721 "num_base_bdevs_operational": 2, 00:27:07.721 "base_bdevs_list": [ 00:27:07.721 { 00:27:07.721 "name": "BaseBdev1", 00:27:07.721 "uuid": "7d6577b8-e502-584e-8aa6-10a922354626", 00:27:07.721 "is_configured": true, 00:27:07.721 "data_offset": 0, 00:27:07.721 "data_size": 65536 00:27:07.721 }, 00:27:07.721 { 00:27:07.721 "name": "BaseBdev2", 00:27:07.721 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:07.721 "is_configured": true, 00:27:07.721 "data_offset": 0, 00:27:07.721 "data_size": 65536 00:27:07.721 } 00:27:07.721 ] 00:27:07.721 }' 00:27:07.721 07:24:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.721 07:24:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.979 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:07.980 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:07.980 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.980 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.980 [2024-11-20 07:24:32.256301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.239 [2024-11-20 07:24:32.363925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:08.239 07:24:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.239 "name": "raid_bdev1", 00:27:08.239 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:08.239 "strip_size_kb": 0, 00:27:08.239 "state": "online", 00:27:08.239 "raid_level": "raid1", 00:27:08.239 "superblock": false, 00:27:08.239 "num_base_bdevs": 2, 00:27:08.239 "num_base_bdevs_discovered": 1, 00:27:08.239 "num_base_bdevs_operational": 1, 00:27:08.239 "base_bdevs_list": [ 00:27:08.239 { 00:27:08.239 "name": null, 00:27:08.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.239 "is_configured": false, 00:27:08.239 "data_offset": 0, 00:27:08.239 "data_size": 65536 00:27:08.239 }, 00:27:08.239 { 00:27:08.239 "name": "BaseBdev2", 00:27:08.239 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:08.239 "is_configured": true, 00:27:08.239 "data_offset": 0, 00:27:08.239 "data_size": 65536 00:27:08.239 } 00:27:08.239 ] 00:27:08.239 }' 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:27:08.239 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.239 [2024-11-20 07:24:32.500208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:08.239 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:08.239 Zero copy mechanism will not be used. 00:27:08.239 Running I/O for 60 seconds... 00:27:08.806 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:08.806 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.806 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.806 [2024-11-20 07:24:32.897310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:08.806 07:24:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.806 07:24:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:08.806 [2024-11-20 07:24:32.950910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:08.806 [2024-11-20 07:24:32.953613] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:08.806 [2024-11-20 07:24:33.077681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:08.807 [2024-11-20 07:24:33.078346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:09.065 [2024-11-20 07:24:33.297794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:09.065 [2024-11-20 07:24:33.298435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:09.583 173.00 IOPS, 519.00 MiB/s 
[2024-11-20T07:24:33.872Z] [2024-11-20 07:24:33.769769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:09.880 "name": "raid_bdev1", 00:27:09.880 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:09.880 "strip_size_kb": 0, 00:27:09.880 "state": "online", 00:27:09.880 "raid_level": "raid1", 00:27:09.880 "superblock": false, 00:27:09.880 "num_base_bdevs": 2, 00:27:09.880 "num_base_bdevs_discovered": 2, 00:27:09.880 "num_base_bdevs_operational": 2, 00:27:09.880 "process": { 00:27:09.880 "type": "rebuild", 00:27:09.880 "target": "spare", 00:27:09.880 "progress": { 00:27:09.880 "blocks": 12288, 00:27:09.880 "percent": 18 00:27:09.880 } 00:27:09.880 }, 00:27:09.880 "base_bdevs_list": [ 00:27:09.880 
{ 00:27:09.880 "name": "spare", 00:27:09.880 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:09.880 "is_configured": true, 00:27:09.880 "data_offset": 0, 00:27:09.880 "data_size": 65536 00:27:09.880 }, 00:27:09.880 { 00:27:09.880 "name": "BaseBdev2", 00:27:09.880 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:09.880 "is_configured": true, 00:27:09.880 "data_offset": 0, 00:27:09.880 "data_size": 65536 00:27:09.880 } 00:27:09.880 ] 00:27:09.880 }' 00:27:09.880 07:24:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:09.880 [2024-11-20 07:24:34.017781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.880 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 [2024-11-20 07:24:34.109929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.139 [2024-11-20 07:24:34.144204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:10.139 [2024-11-20 07:24:34.254398] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:10.139 [2024-11-20 07:24:34.265060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.139 [2024-11-20 07:24:34.265298] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.139 [2024-11-20 07:24:34.265326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:10.139 [2024-11-20 07:24:34.299859] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.139 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.139 "name": "raid_bdev1", 00:27:10.139 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:10.139 "strip_size_kb": 0, 00:27:10.139 "state": "online", 00:27:10.139 "raid_level": "raid1", 00:27:10.139 "superblock": false, 00:27:10.139 "num_base_bdevs": 2, 00:27:10.139 "num_base_bdevs_discovered": 1, 00:27:10.140 "num_base_bdevs_operational": 1, 00:27:10.140 "base_bdevs_list": [ 00:27:10.140 { 00:27:10.140 "name": null, 00:27:10.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.140 "is_configured": false, 00:27:10.140 "data_offset": 0, 00:27:10.140 "data_size": 65536 00:27:10.140 }, 00:27:10.140 { 00:27:10.140 "name": "BaseBdev2", 00:27:10.140 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:10.140 "is_configured": true, 00:27:10.140 "data_offset": 0, 00:27:10.140 "data_size": 65536 00:27:10.140 } 00:27:10.140 ] 00:27:10.140 }' 00:27:10.140 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.140 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.658 130.00 IOPS, 390.00 MiB/s [2024-11-20T07:24:34.947Z] 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:10.658 "name": "raid_bdev1", 00:27:10.658 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:10.658 "strip_size_kb": 0, 00:27:10.658 "state": "online", 00:27:10.658 "raid_level": "raid1", 00:27:10.658 "superblock": false, 00:27:10.658 "num_base_bdevs": 2, 00:27:10.658 "num_base_bdevs_discovered": 1, 00:27:10.658 "num_base_bdevs_operational": 1, 00:27:10.658 "base_bdevs_list": [ 00:27:10.658 { 00:27:10.658 "name": null, 00:27:10.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.658 "is_configured": false, 00:27:10.658 "data_offset": 0, 00:27:10.658 "data_size": 65536 00:27:10.658 }, 00:27:10.658 { 00:27:10.658 "name": "BaseBdev2", 00:27:10.658 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:10.658 "is_configured": true, 00:27:10.658 "data_offset": 0, 00:27:10.658 "data_size": 65536 00:27:10.658 } 00:27:10.658 ] 00:27:10.658 }' 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:10.658 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:10.917 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:10.917 07:24:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:10.917 07:24:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.917 07:24:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.917 [2024-11-20 07:24:35.004045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:10.917 07:24:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.917 07:24:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:10.917 [2024-11-20 07:24:35.049400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:10.917 [2024-11-20 07:24:35.051945] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:10.917 [2024-11-20 07:24:35.162006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:10.917 [2024-11-20 07:24:35.162670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:11.177 [2024-11-20 07:24:35.388872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:11.177 [2024-11-20 07:24:35.389456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:12.003 141.67 IOPS, 425.00 MiB/s [2024-11-20T07:24:36.292Z] 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:12.003 "name": "raid_bdev1", 00:27:12.003 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:12.003 "strip_size_kb": 0, 00:27:12.003 "state": "online", 00:27:12.003 "raid_level": "raid1", 00:27:12.003 "superblock": false, 00:27:12.003 "num_base_bdevs": 2, 00:27:12.003 "num_base_bdevs_discovered": 2, 00:27:12.003 "num_base_bdevs_operational": 2, 00:27:12.003 "process": { 00:27:12.003 "type": "rebuild", 00:27:12.003 "target": "spare", 00:27:12.003 "progress": { 00:27:12.003 "blocks": 14336, 00:27:12.003 "percent": 21 00:27:12.003 } 00:27:12.003 }, 00:27:12.003 "base_bdevs_list": [ 00:27:12.003 { 00:27:12.003 "name": "spare", 00:27:12.003 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:12.003 "is_configured": true, 00:27:12.003 "data_offset": 0, 00:27:12.003 "data_size": 65536 00:27:12.003 }, 00:27:12.003 { 00:27:12.003 "name": "BaseBdev2", 00:27:12.003 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:12.003 "is_configured": true, 00:27:12.003 "data_offset": 0, 00:27:12.003 "data_size": 65536 00:27:12.003 } 00:27:12.003 ] 00:27:12.003 }' 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:12.003 [2024-11-20 07:24:36.144412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:12.003 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:12.004 "name": "raid_bdev1", 00:27:12.004 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:12.004 "strip_size_kb": 0, 00:27:12.004 "state": "online", 00:27:12.004 "raid_level": "raid1", 00:27:12.004 "superblock": false, 00:27:12.004 "num_base_bdevs": 2, 00:27:12.004 "num_base_bdevs_discovered": 2, 00:27:12.004 "num_base_bdevs_operational": 2, 00:27:12.004 "process": { 00:27:12.004 "type": "rebuild", 00:27:12.004 "target": "spare", 00:27:12.004 "progress": { 00:27:12.004 "blocks": 16384, 00:27:12.004 "percent": 25 00:27:12.004 } 00:27:12.004 }, 00:27:12.004 "base_bdevs_list": [ 00:27:12.004 { 00:27:12.004 "name": "spare", 00:27:12.004 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:12.004 "is_configured": true, 00:27:12.004 "data_offset": 0, 00:27:12.004 "data_size": 65536 00:27:12.004 }, 00:27:12.004 { 00:27:12.004 "name": "BaseBdev2", 00:27:12.004 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:12.004 "is_configured": true, 00:27:12.004 "data_offset": 0, 00:27:12.004 "data_size": 65536 00:27:12.004 } 00:27:12.004 ] 00:27:12.004 }' 00:27:12.004 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:12.262 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:12.262 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:12.262 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:12.262 07:24:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:12.262 [2024-11-20 07:24:36.477833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 
00:27:12.829 130.50 IOPS, 391.50 MiB/s [2024-11-20T07:24:37.118Z] [2024-11-20 07:24:36.814074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:12.829 [2024-11-20 07:24:36.814720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:12.829 [2024-11-20 07:24:37.033739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:13.396 [2024-11-20 07:24:37.378926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:13.396 "name": "raid_bdev1", 00:27:13.396 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:13.396 "strip_size_kb": 0, 00:27:13.396 "state": "online", 00:27:13.396 "raid_level": "raid1", 00:27:13.396 "superblock": false, 00:27:13.396 "num_base_bdevs": 2, 00:27:13.396 "num_base_bdevs_discovered": 2, 00:27:13.396 "num_base_bdevs_operational": 2, 00:27:13.396 "process": { 00:27:13.396 "type": "rebuild", 00:27:13.396 "target": "spare", 00:27:13.396 "progress": { 00:27:13.396 "blocks": 34816, 00:27:13.396 "percent": 53 00:27:13.396 } 00:27:13.396 }, 00:27:13.396 "base_bdevs_list": [ 00:27:13.396 { 00:27:13.396 "name": "spare", 00:27:13.396 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:13.396 "is_configured": true, 00:27:13.396 "data_offset": 0, 00:27:13.396 "data_size": 65536 00:27:13.396 }, 00:27:13.396 { 00:27:13.396 "name": "BaseBdev2", 00:27:13.396 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:13.396 "is_configured": true, 00:27:13.396 "data_offset": 0, 00:27:13.396 "data_size": 65536 00:27:13.396 } 00:27:13.396 ] 00:27:13.396 }' 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:13.396 112.80 IOPS, 338.40 MiB/s [2024-11-20T07:24:37.685Z] 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.396 07:24:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:13.396 [2024-11-20 07:24:37.616849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:13.654 [2024-11-20 07:24:37.723786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
40960 offset_begin: 36864 offset_end: 43008 00:27:13.654 [2024-11-20 07:24:37.724537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:13.913 [2024-11-20 07:24:38.053490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:14.481 99.83 IOPS, 299.50 MiB/s [2024-11-20T07:24:38.770Z] 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:14.481 "name": "raid_bdev1", 00:27:14.481 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:14.481 "strip_size_kb": 0, 00:27:14.481 "state": "online", 00:27:14.481 "raid_level": "raid1", 00:27:14.481 "superblock": false, 00:27:14.481 
"num_base_bdevs": 2, 00:27:14.481 "num_base_bdevs_discovered": 2, 00:27:14.481 "num_base_bdevs_operational": 2, 00:27:14.481 "process": { 00:27:14.481 "type": "rebuild", 00:27:14.481 "target": "spare", 00:27:14.481 "progress": { 00:27:14.481 "blocks": 51200, 00:27:14.481 "percent": 78 00:27:14.481 } 00:27:14.481 }, 00:27:14.481 "base_bdevs_list": [ 00:27:14.481 { 00:27:14.481 "name": "spare", 00:27:14.481 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:14.481 "is_configured": true, 00:27:14.481 "data_offset": 0, 00:27:14.481 "data_size": 65536 00:27:14.481 }, 00:27:14.481 { 00:27:14.481 "name": "BaseBdev2", 00:27:14.481 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:14.481 "is_configured": true, 00:27:14.481 "data_offset": 0, 00:27:14.481 "data_size": 65536 00:27:14.481 } 00:27:14.481 ] 00:27:14.481 }' 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.481 07:24:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:15.060 [2024-11-20 07:24:39.266761] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:15.353 [2024-11-20 07:24:39.364472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:15.353 [2024-11-20 07:24:39.366733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.613 89.86 IOPS, 269.57 MiB/s [2024-11-20T07:24:39.902Z] 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.613 "name": "raid_bdev1", 00:27:15.613 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:15.613 "strip_size_kb": 0, 00:27:15.613 "state": "online", 00:27:15.613 "raid_level": "raid1", 00:27:15.613 "superblock": false, 00:27:15.613 "num_base_bdevs": 2, 00:27:15.613 "num_base_bdevs_discovered": 2, 00:27:15.613 "num_base_bdevs_operational": 2, 00:27:15.613 "base_bdevs_list": [ 00:27:15.613 { 00:27:15.613 "name": "spare", 00:27:15.613 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:15.613 "is_configured": true, 00:27:15.613 "data_offset": 0, 00:27:15.613 "data_size": 65536 00:27:15.613 }, 00:27:15.613 { 00:27:15.613 "name": "BaseBdev2", 00:27:15.613 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:15.613 "is_configured": true, 00:27:15.613 "data_offset": 0, 00:27:15.613 "data_size": 65536 00:27:15.613 } 00:27:15.613 ] 00:27:15.613 }' 
00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.613 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.614 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.873 07:24:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.873 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.873 "name": "raid_bdev1", 00:27:15.873 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:15.873 "strip_size_kb": 0, 00:27:15.873 "state": "online", 00:27:15.873 "raid_level": "raid1", 00:27:15.873 "superblock": false, 00:27:15.873 
"num_base_bdevs": 2, 00:27:15.873 "num_base_bdevs_discovered": 2, 00:27:15.873 "num_base_bdevs_operational": 2, 00:27:15.873 "base_bdevs_list": [ 00:27:15.873 { 00:27:15.873 "name": "spare", 00:27:15.873 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:15.873 "is_configured": true, 00:27:15.873 "data_offset": 0, 00:27:15.873 "data_size": 65536 00:27:15.873 }, 00:27:15.873 { 00:27:15.873 "name": "BaseBdev2", 00:27:15.873 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:15.873 "is_configured": true, 00:27:15.873 "data_offset": 0, 00:27:15.873 "data_size": 65536 00:27:15.873 } 00:27:15.873 ] 00:27:15.873 }' 00:27:15.873 07:24:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.873 07:24:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.873 "name": "raid_bdev1", 00:27:15.873 "uuid": "89b537a5-40d3-4440-9631-f18bf75df780", 00:27:15.873 "strip_size_kb": 0, 00:27:15.873 "state": "online", 00:27:15.873 "raid_level": "raid1", 00:27:15.873 "superblock": false, 00:27:15.873 "num_base_bdevs": 2, 00:27:15.873 "num_base_bdevs_discovered": 2, 00:27:15.873 "num_base_bdevs_operational": 2, 00:27:15.873 "base_bdevs_list": [ 00:27:15.873 { 00:27:15.873 "name": "spare", 00:27:15.873 "uuid": "ec64b69d-2d59-5f89-8354-73c4d0431ab5", 00:27:15.873 "is_configured": true, 00:27:15.873 "data_offset": 0, 00:27:15.873 "data_size": 65536 00:27:15.873 }, 00:27:15.873 { 00:27:15.873 "name": "BaseBdev2", 00:27:15.873 "uuid": "211130ed-052a-5b42-8dad-6b7fb9c512ac", 00:27:15.873 "is_configured": true, 00:27:15.873 "data_offset": 0, 00:27:15.873 "data_size": 65536 00:27:15.873 } 00:27:15.873 ] 00:27:15.873 }' 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.873 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:16.441 83.00 IOPS, 249.00 MiB/s [2024-11-20T07:24:40.730Z] 07:24:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:16.441 [2024-11-20 07:24:40.575182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:16.441 [2024-11-20 07:24:40.575409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:16.441 00:27:16.441 Latency(us) 00:27:16.441 [2024-11-20T07:24:40.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.441 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:16.441 raid_bdev1 : 8.12 81.98 245.94 0.00 0.00 16534.05 268.10 134408.38 00:27:16.441 [2024-11-20T07:24:40.730Z] =================================================================================================================== 00:27:16.441 [2024-11-20T07:24:40.730Z] Total : 81.98 245.94 0.00 0.00 16534.05 268.10 134408.38 00:27:16.441 [2024-11-20 07:24:40.645441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.441 [2024-11-20 07:24:40.645646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:16.441 [2024-11-20 07:24:40.645789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:16.441 [2024-11-20 07:24:40.646014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:16.441 { 00:27:16.441 "results": [ 00:27:16.441 { 00:27:16.441 "job": "raid_bdev1", 00:27:16.441 "core_mask": "0x1", 00:27:16.441 "workload": "randrw", 00:27:16.441 "percentage": 50, 00:27:16.441 "status": "finished", 00:27:16.441 "queue_depth": 2, 00:27:16.441 "io_size": 3145728, 00:27:16.441 "runtime": 8.123985, 
00:27:16.441 "iops": 81.97947189710469, 00:27:16.441 "mibps": 245.93841569131405, 00:27:16.441 "io_failed": 0, 00:27:16.441 "io_timeout": 0, 00:27:16.441 "avg_latency_us": 16534.045492765494, 00:27:16.441 "min_latency_us": 268.1018181818182, 00:27:16.441 "max_latency_us": 134408.37818181817 00:27:16.441 } 00:27:16.441 ], 00:27:16.441 "core_count": 1 00:27:16.441 } 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@12 -- # local i 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:16.441 07:24:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:27:17.010 /dev/nbd0 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:17.010 1+0 records in 00:27:17.010 1+0 records out 00:27:17.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388891 s, 10.5 MB/s 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # size=4096 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:17.010 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:17.269 /dev/nbd1 00:27:17.269 07:24:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:17.269 1+0 records in 00:27:17.269 1+0 records out 00:27:17.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299029 s, 13.7 MB/s 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:27:17.269 07:24:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:17.269 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:17.528 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:17.787 07:24:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76902 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76902 ']' 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76902 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76902 00:27:18.046 killing process with pid 76902 00:27:18.046 Received shutdown signal, test time was about 9.748678 seconds 00:27:18.046 00:27:18.046 Latency(us) 00:27:18.046 [2024-11-20T07:24:42.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.046 [2024-11-20T07:24:42.335Z] =================================================================================================================== 00:27:18.046 [2024-11-20T07:24:42.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76902' 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76902 00:27:18.046 07:24:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76902 00:27:18.046 [2024-11-20 07:24:42.251507] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:18.304 [2024-11-20 07:24:42.456527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.682 ************************************ 00:27:19.682 END TEST raid_rebuild_test_io 00:27:19.682 ************************************ 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:27:19.682 00:27:19.682 real 0m13.161s 00:27:19.682 user 0m17.420s 00:27:19.682 sys 0m1.467s 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.682 07:24:43 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:19.682 07:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:19.682 07:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.682 07:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.682 ************************************ 00:27:19.682 START TEST raid_rebuild_test_sb_io 00:27:19.682 ************************************ 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77289 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77289 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77289 ']' 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.682 
07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.682 07:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.682 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:19.682 Zero copy mechanism will not be used. 00:27:19.682 [2024-11-20 07:24:43.707905] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:27:19.682 [2024-11-20 07:24:43.708116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77289 ] 00:27:19.682 [2024-11-20 07:24:43.892904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.941 [2024-11-20 07:24:44.023019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.941 [2024-11-20 07:24:44.227692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:19.941 [2024-11-20 07:24:44.227734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.509 BaseBdev1_malloc 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.509 [2024-11-20 07:24:44.677293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:20.509 [2024-11-20 07:24:44.677393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.509 [2024-11-20 07:24:44.677429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:20.509 [2024-11-20 07:24:44.677449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.509 [2024-11-20 07:24:44.680338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.509 [2024-11-20 07:24:44.680402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:20.509 BaseBdev1 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.509 BaseBdev2_malloc 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.509 [2024-11-20 07:24:44.733375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:20.509 [2024-11-20 07:24:44.733470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.509 [2024-11-20 07:24:44.733501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:20.509 [2024-11-20 07:24:44.733522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.509 [2024-11-20 07:24:44.736612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.509 [2024-11-20 07:24:44.736668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:20.509 BaseBdev2 00:27:20.509 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.510 spare_malloc 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.510 07:24:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.510 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.768 spare_delay 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.768 [2024-11-20 07:24:44.811012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:20.768 [2024-11-20 07:24:44.811112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.768 [2024-11-20 07:24:44.811149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:20.768 [2024-11-20 07:24:44.811170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.768 [2024-11-20 07:24:44.814246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.768 [2024-11-20 07:24:44.814290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:20.768 spare 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:20.768 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.768 07:24:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.768 [2024-11-20 07:24:44.819260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:20.768 [2024-11-20 07:24:44.821913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:20.768 [2024-11-20 07:24:44.822212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:20.768 [2024-11-20 07:24:44.822239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:20.768 [2024-11-20 07:24:44.822571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:20.768 [2024-11-20 07:24:44.822825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:20.769 [2024-11-20 07:24:44.822847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:20.769 [2024-11-20 07:24:44.823057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.769 "name": "raid_bdev1", 00:27:20.769 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:20.769 "strip_size_kb": 0, 00:27:20.769 "state": "online", 00:27:20.769 "raid_level": "raid1", 00:27:20.769 "superblock": true, 00:27:20.769 "num_base_bdevs": 2, 00:27:20.769 "num_base_bdevs_discovered": 2, 00:27:20.769 "num_base_bdevs_operational": 2, 00:27:20.769 "base_bdevs_list": [ 00:27:20.769 { 00:27:20.769 "name": "BaseBdev1", 00:27:20.769 "uuid": "ecf9c6e3-5a91-5fd8-979c-5a4c44df256e", 00:27:20.769 "is_configured": true, 00:27:20.769 "data_offset": 2048, 00:27:20.769 "data_size": 63488 00:27:20.769 }, 00:27:20.769 { 00:27:20.769 "name": "BaseBdev2", 00:27:20.769 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:20.769 "is_configured": true, 00:27:20.769 "data_offset": 2048, 00:27:20.769 "data_size": 63488 00:27:20.769 } 00:27:20.769 ] 00:27:20.769 }' 00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:27:20.769 07:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:21.336 [2024-11-20 07:24:45.347787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.336 [2024-11-20 07:24:45.451370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.336 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.337 07:24:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.337 "name": "raid_bdev1", 00:27:21.337 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:21.337 "strip_size_kb": 0, 00:27:21.337 "state": "online", 00:27:21.337 "raid_level": "raid1", 00:27:21.337 "superblock": true, 00:27:21.337 "num_base_bdevs": 2, 00:27:21.337 "num_base_bdevs_discovered": 1, 00:27:21.337 "num_base_bdevs_operational": 1, 00:27:21.337 "base_bdevs_list": [ 00:27:21.337 { 00:27:21.337 "name": null, 00:27:21.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.337 "is_configured": false, 00:27:21.337 "data_offset": 0, 00:27:21.337 "data_size": 63488 00:27:21.337 }, 00:27:21.337 { 00:27:21.337 "name": "BaseBdev2", 00:27:21.337 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:21.337 "is_configured": true, 00:27:21.337 "data_offset": 2048, 00:27:21.337 "data_size": 63488 00:27:21.337 } 00:27:21.337 ] 00:27:21.337 }' 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.337 07:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.337 [2024-11-20 07:24:45.560034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:21.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:21.337 Zero copy mechanism will not be used. 00:27:21.337 Running I/O for 60 seconds... 
00:27:21.904 07:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:21.904 07:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.904 07:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.904 [2024-11-20 07:24:46.058196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:21.904 07:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.905 07:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:21.905 [2024-11-20 07:24:46.119779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:21.905 [2024-11-20 07:24:46.122219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:22.163 [2024-11-20 07:24:46.233354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:22.163 [2024-11-20 07:24:46.234023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:22.422 [2024-11-20 07:24:46.452959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:22.422 [2024-11-20 07:24:46.453380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:22.681 199.00 IOPS, 597.00 MiB/s [2024-11-20T07:24:46.970Z] [2024-11-20 07:24:46.805267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:22.940 [2024-11-20 07:24:47.041624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:22.940 [2024-11-20 07:24:47.042051] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:22.940 "name": "raid_bdev1", 00:27:22.940 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:22.940 "strip_size_kb": 0, 00:27:22.940 "state": "online", 00:27:22.940 "raid_level": "raid1", 00:27:22.940 "superblock": true, 00:27:22.940 "num_base_bdevs": 2, 00:27:22.940 "num_base_bdevs_discovered": 2, 00:27:22.940 "num_base_bdevs_operational": 2, 00:27:22.940 "process": { 00:27:22.940 "type": "rebuild", 00:27:22.940 "target": "spare", 00:27:22.940 "progress": { 00:27:22.940 "blocks": 10240, 00:27:22.940 "percent": 16 00:27:22.940 } 00:27:22.940 }, 00:27:22.940 "base_bdevs_list": [ 00:27:22.940 { 00:27:22.940 "name": "spare", 
00:27:22.940 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:22.940 "is_configured": true, 00:27:22.940 "data_offset": 2048, 00:27:22.940 "data_size": 63488 00:27:22.940 }, 00:27:22.940 { 00:27:22.940 "name": "BaseBdev2", 00:27:22.940 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:22.940 "is_configured": true, 00:27:22.940 "data_offset": 2048, 00:27:22.940 "data_size": 63488 00:27:22.940 } 00:27:22.940 ] 00:27:22.940 }' 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.940 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:23.200 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.200 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:23.200 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.200 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:23.200 [2024-11-20 07:24:47.277294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:23.200 [2024-11-20 07:24:47.385742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:23.200 [2024-11-20 07:24:47.386404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:23.459 [2024-11-20 07:24:47.498721] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:23.459 [2024-11-20 07:24:47.509461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.459 [2024-11-20 07:24:47.509530] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:23.459 [2024-11-20 07:24:47.509547] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:23.459 137.00 IOPS, 411.00 MiB/s [2024-11-20T07:24:47.748Z] [2024-11-20 07:24:47.569975] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.459 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.459 "name": "raid_bdev1", 00:27:23.459 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:23.459 "strip_size_kb": 0, 00:27:23.459 "state": "online", 00:27:23.460 "raid_level": "raid1", 00:27:23.460 "superblock": true, 00:27:23.460 "num_base_bdevs": 2, 00:27:23.460 "num_base_bdevs_discovered": 1, 00:27:23.460 "num_base_bdevs_operational": 1, 00:27:23.460 "base_bdevs_list": [ 00:27:23.460 { 00:27:23.460 "name": null, 00:27:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.460 "is_configured": false, 00:27:23.460 "data_offset": 0, 00:27:23.460 "data_size": 63488 00:27:23.460 }, 00:27:23.460 { 00:27:23.460 "name": "BaseBdev2", 00:27:23.460 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:23.460 "is_configured": true, 00:27:23.460 "data_offset": 2048, 00:27:23.460 "data_size": 63488 00:27:23.460 } 00:27:23.460 ] 00:27:23.460 }' 00:27:23.460 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.460 07:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:24.026 07:24:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:24.026 "name": "raid_bdev1", 00:27:24.026 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:24.026 "strip_size_kb": 0, 00:27:24.026 "state": "online", 00:27:24.026 "raid_level": "raid1", 00:27:24.026 "superblock": true, 00:27:24.026 "num_base_bdevs": 2, 00:27:24.026 "num_base_bdevs_discovered": 1, 00:27:24.026 "num_base_bdevs_operational": 1, 00:27:24.026 "base_bdevs_list": [ 00:27:24.026 { 00:27:24.026 "name": null, 00:27:24.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.026 "is_configured": false, 00:27:24.026 "data_offset": 0, 00:27:24.026 "data_size": 63488 00:27:24.026 }, 00:27:24.026 { 00:27:24.026 "name": "BaseBdev2", 00:27:24.026 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:24.026 "is_configured": true, 00:27:24.026 "data_offset": 2048, 00:27:24.026 "data_size": 63488 00:27:24.026 } 00:27:24.026 ] 00:27:24.026 }' 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:24.026 07:24:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.026 [2024-11-20 07:24:48.259725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.026 07:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:24.286 [2024-11-20 07:24:48.327977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:24.286 [2024-11-20 07:24:48.330478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.286 [2024-11-20 07:24:48.449777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:24.286 [2024-11-20 07:24:48.450373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:24.544 170.33 IOPS, 511.00 MiB/s [2024-11-20T07:24:48.833Z] [2024-11-20 07:24:48.668351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:24.544 [2024-11-20 07:24:48.668843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:24.815 [2024-11-20 07:24:49.018010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:24.815 [2024-11-20 07:24:49.018789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:25.077 [2024-11-20 07:24:49.231020] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.077 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.335 "name": "raid_bdev1", 00:27:25.335 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:25.335 "strip_size_kb": 0, 00:27:25.335 "state": "online", 00:27:25.335 "raid_level": "raid1", 00:27:25.335 "superblock": true, 00:27:25.335 "num_base_bdevs": 2, 00:27:25.335 "num_base_bdevs_discovered": 2, 00:27:25.335 "num_base_bdevs_operational": 2, 00:27:25.335 "process": { 00:27:25.335 "type": "rebuild", 00:27:25.335 "target": "spare", 00:27:25.335 "progress": { 00:27:25.335 "blocks": 10240, 00:27:25.335 "percent": 16 00:27:25.335 } 00:27:25.335 }, 00:27:25.335 "base_bdevs_list": [ 00:27:25.335 { 00:27:25.335 "name": "spare", 
00:27:25.335 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:25.335 "is_configured": true, 00:27:25.335 "data_offset": 2048, 00:27:25.335 "data_size": 63488 00:27:25.335 }, 00:27:25.335 { 00:27:25.335 "name": "BaseBdev2", 00:27:25.335 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:25.335 "is_configured": true, 00:27:25.335 "data_offset": 2048, 00:27:25.335 "data_size": 63488 00:27:25.335 } 00:27:25.335 ] 00:27:25.335 }' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:25.335 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.335 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.336 "name": "raid_bdev1", 00:27:25.336 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:25.336 "strip_size_kb": 0, 00:27:25.336 "state": "online", 00:27:25.336 "raid_level": "raid1", 00:27:25.336 "superblock": true, 00:27:25.336 "num_base_bdevs": 2, 00:27:25.336 "num_base_bdevs_discovered": 2, 00:27:25.336 "num_base_bdevs_operational": 2, 00:27:25.336 "process": { 00:27:25.336 "type": "rebuild", 00:27:25.336 "target": "spare", 00:27:25.336 "progress": { 00:27:25.336 "blocks": 12288, 00:27:25.336 "percent": 19 00:27:25.336 } 00:27:25.336 }, 00:27:25.336 "base_bdevs_list": [ 00:27:25.336 { 00:27:25.336 "name": "spare", 00:27:25.336 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:25.336 "is_configured": true, 00:27:25.336 "data_offset": 2048, 00:27:25.336 "data_size": 63488 00:27:25.336 }, 00:27:25.336 { 00:27:25.336 "name": "BaseBdev2", 00:27:25.336 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:25.336 "is_configured": true, 00:27:25.336 
"data_offset": 2048, 00:27:25.336 "data_size": 63488 00:27:25.336 } 00:27:25.336 ] 00:27:25.336 }' 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:25.336 [2024-11-20 07:24:49.563718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:25.336 145.75 IOPS, 437.25 MiB/s [2024-11-20T07:24:49.625Z] 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.336 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:25.595 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.595 07:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:25.595 [2024-11-20 07:24:49.678511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:25.595 [2024-11-20 07:24:49.679022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:25.854 [2024-11-20 07:24:49.906223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:26.112 [2024-11-20 07:24:50.389056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:26.112 [2024-11-20 07:24:50.389420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:26.372 128.40 IOPS, 385.20 MiB/s [2024-11-20T07:24:50.661Z] 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
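[Editor's note] The `[: =: unary operator expected` error recorded earlier in this trace (bdev_raid.sh line 666, where the xtrace shows `'[' = false ']'`) is the classic failure mode of passing an unquoted variable that expanded to the empty string to the `[` builtin: `[` then sees only `= false`, which is not a valid unary or binary test. A minimal standalone sketch of the failure and the usual fixes (the variable name `flag` is illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# flag stands in for whatever variable expanded empty at bdev_raid.sh:666.
flag=""

# Unquoted empty expansion: after word splitting, `[` receives only
# `= false` and fails with "unary operator expected" (exit status 2),
# exactly as captured in the log. Error text suppressed here.
[ $flag = false ] 2>/dev/null
echo "unquoted exit status: $?"

# Quoting keeps the empty string as a real argument, so the test is
# well-formed and simply evaluates to false (exit status 1).
[ "$flag" = false ]
echo "quoted exit status: $?"

# [[ ]] is a bash keyword, not a command: no word splitting occurs,
# so the unquoted form is safe as well.
[[ $flag = false ]]
echo "[[ ]] exit status: $?"
```

The surrounding checks in this script already use the `[[ … ]]` form (e.g. `[[ rebuild == \r\e\b\u\i\l\d ]]`), which is immune to this problem; only the `'[' … ']'` test at line 666 hit the unquoted-expansion path.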
00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.372 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:26.630 "name": "raid_bdev1", 00:27:26.630 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:26.630 "strip_size_kb": 0, 00:27:26.630 "state": "online", 00:27:26.630 "raid_level": "raid1", 00:27:26.630 "superblock": true, 00:27:26.630 "num_base_bdevs": 2, 00:27:26.630 "num_base_bdevs_discovered": 2, 00:27:26.630 "num_base_bdevs_operational": 2, 00:27:26.630 "process": { 00:27:26.630 "type": "rebuild", 00:27:26.630 "target": "spare", 00:27:26.630 "progress": { 00:27:26.630 "blocks": 30720, 00:27:26.630 "percent": 48 00:27:26.630 } 00:27:26.630 }, 00:27:26.630 "base_bdevs_list": [ 00:27:26.630 { 00:27:26.630 "name": "spare", 00:27:26.630 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:26.630 "is_configured": true, 00:27:26.630 "data_offset": 2048, 00:27:26.630 "data_size": 63488 00:27:26.630 }, 00:27:26.630 { 00:27:26.630 "name": "BaseBdev2", 00:27:26.630 "uuid": 
"a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:26.630 "is_configured": true, 00:27:26.630 "data_offset": 2048, 00:27:26.630 "data_size": 63488 00:27:26.630 } 00:27:26.630 ] 00:27:26.630 }' 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:26.630 [2024-11-20 07:24:50.714622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.630 07:24:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:26.630 [2024-11-20 07:24:50.864460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:27.196 [2024-11-20 07:24:51.215985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:27.196 [2024-11-20 07:24:51.418005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:27.196 [2024-11-20 07:24:51.418447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:27.714 113.67 IOPS, 341.00 MiB/s [2024-11-20T07:24:52.003Z] [2024-11-20 07:24:51.784709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:27.714 "name": "raid_bdev1", 00:27:27.714 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:27.714 "strip_size_kb": 0, 00:27:27.714 "state": "online", 00:27:27.714 "raid_level": "raid1", 00:27:27.714 "superblock": true, 00:27:27.714 "num_base_bdevs": 2, 00:27:27.714 "num_base_bdevs_discovered": 2, 00:27:27.714 "num_base_bdevs_operational": 2, 00:27:27.714 "process": { 00:27:27.714 "type": "rebuild", 00:27:27.714 "target": "spare", 00:27:27.714 "progress": { 00:27:27.714 "blocks": 45056, 00:27:27.714 "percent": 70 00:27:27.714 } 00:27:27.714 }, 00:27:27.714 "base_bdevs_list": [ 00:27:27.714 { 00:27:27.714 "name": "spare", 00:27:27.714 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:27.714 "is_configured": true, 00:27:27.714 "data_offset": 2048, 00:27:27.714 "data_size": 63488 00:27:27.714 }, 00:27:27.714 { 
00:27:27.714 "name": "BaseBdev2", 00:27:27.714 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:27.714 "is_configured": true, 00:27:27.714 "data_offset": 2048, 00:27:27.714 "data_size": 63488 00:27:27.714 } 00:27:27.714 ] 00:27:27.714 }' 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.714 07:24:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:28.282 [2024-11-20 07:24:52.364537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:28.541 101.43 IOPS, 304.29 MiB/s [2024-11-20T07:24:52.830Z] [2024-11-20 07:24:52.694591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:28.801 [2024-11-20 07:24:52.916781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.801 07:24:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.801 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:28.801 "name": "raid_bdev1", 00:27:28.801 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:28.801 "strip_size_kb": 0, 00:27:28.801 "state": "online", 00:27:28.801 "raid_level": "raid1", 00:27:28.801 "superblock": true, 00:27:28.801 "num_base_bdevs": 2, 00:27:28.801 "num_base_bdevs_discovered": 2, 00:27:28.801 "num_base_bdevs_operational": 2, 00:27:28.801 "process": { 00:27:28.801 "type": "rebuild", 00:27:28.801 "target": "spare", 00:27:28.801 "progress": { 00:27:28.801 "blocks": 59392, 00:27:28.801 "percent": 93 00:27:28.801 } 00:27:28.801 }, 00:27:28.801 "base_bdevs_list": [ 00:27:28.801 { 00:27:28.801 "name": "spare", 00:27:28.801 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:28.801 "is_configured": true, 00:27:28.801 "data_offset": 2048, 00:27:28.801 "data_size": 63488 00:27:28.801 }, 00:27:28.801 { 00:27:28.801 "name": "BaseBdev2", 00:27:28.801 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:28.801 "is_configured": true, 00:27:28.801 "data_offset": 2048, 00:27:28.801 "data_size": 63488 00:27:28.801 } 00:27:28.801 ] 00:27:28.801 }' 00:27:28.801 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:28.801 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:28.801 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:29.060 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.060 07:24:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:29.060 [2024-11-20 07:24:53.239732] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:29.060 [2024-11-20 07:24:53.339841] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:29.060 [2024-11-20 07:24:53.342478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.888 92.75 IOPS, 278.25 MiB/s [2024-11-20T07:24:54.177Z] 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:29.888 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:30.148 "name": "raid_bdev1", 00:27:30.148 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:30.148 "strip_size_kb": 0, 00:27:30.148 "state": "online", 00:27:30.148 "raid_level": "raid1", 00:27:30.148 "superblock": true, 00:27:30.148 "num_base_bdevs": 2, 00:27:30.148 "num_base_bdevs_discovered": 2, 00:27:30.148 "num_base_bdevs_operational": 2, 00:27:30.148 "base_bdevs_list": [ 00:27:30.148 { 00:27:30.148 "name": "spare", 00:27:30.148 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:30.148 "is_configured": true, 00:27:30.148 "data_offset": 2048, 00:27:30.148 "data_size": 63488 00:27:30.148 }, 00:27:30.148 { 00:27:30.148 "name": "BaseBdev2", 00:27:30.148 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:30.148 "is_configured": true, 00:27:30.148 "data_offset": 2048, 00:27:30.148 "data_size": 63488 00:27:30.148 } 00:27:30.148 ] 00:27:30.148 }' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:30.148 "name": "raid_bdev1", 00:27:30.148 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:30.148 "strip_size_kb": 0, 00:27:30.148 "state": "online", 00:27:30.148 "raid_level": "raid1", 00:27:30.148 "superblock": true, 00:27:30.148 "num_base_bdevs": 2, 00:27:30.148 "num_base_bdevs_discovered": 2, 00:27:30.148 "num_base_bdevs_operational": 2, 00:27:30.148 "base_bdevs_list": [ 00:27:30.148 { 00:27:30.148 "name": "spare", 00:27:30.148 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:30.148 "is_configured": true, 00:27:30.148 "data_offset": 2048, 00:27:30.148 "data_size": 63488 00:27:30.148 }, 00:27:30.148 { 00:27:30.148 "name": "BaseBdev2", 00:27:30.148 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:30.148 "is_configured": true, 00:27:30.148 "data_offset": 2048, 00:27:30.148 "data_size": 63488 00:27:30.148 } 00:27:30.148 ] 00:27:30.148 }' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:30.148 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:30.149 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.409 "name": "raid_bdev1", 00:27:30.409 
"uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:30.409 "strip_size_kb": 0, 00:27:30.409 "state": "online", 00:27:30.409 "raid_level": "raid1", 00:27:30.409 "superblock": true, 00:27:30.409 "num_base_bdevs": 2, 00:27:30.409 "num_base_bdevs_discovered": 2, 00:27:30.409 "num_base_bdevs_operational": 2, 00:27:30.409 "base_bdevs_list": [ 00:27:30.409 { 00:27:30.409 "name": "spare", 00:27:30.409 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:30.409 "is_configured": true, 00:27:30.409 "data_offset": 2048, 00:27:30.409 "data_size": 63488 00:27:30.409 }, 00:27:30.409 { 00:27:30.409 "name": "BaseBdev2", 00:27:30.409 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:30.409 "is_configured": true, 00:27:30.409 "data_offset": 2048, 00:27:30.409 "data_size": 63488 00:27:30.409 } 00:27:30.409 ] 00:27:30.409 }' 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.409 07:24:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.977 86.44 IOPS, 259.33 MiB/s [2024-11-20T07:24:55.266Z] 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.977 [2024-11-20 07:24:55.052241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.977 [2024-11-20 07:24:55.052285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:30.977 00:27:30.977 Latency(us) 00:27:30.977 [2024-11-20T07:24:55.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.977 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:30.977 raid_bdev1 : 9.59 82.49 247.46 0.00 0.00 16102.20 269.96 129642.12 
00:27:30.977 [2024-11-20T07:24:55.266Z] =================================================================================================================== 00:27:30.977 [2024-11-20T07:24:55.266Z] Total : 82.49 247.46 0.00 0.00 16102.20 269.96 129642.12 00:27:30.977 [2024-11-20 07:24:55.171958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:30.977 [2024-11-20 07:24:55.172084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:30.977 [2024-11-20 07:24:55.172216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.977 [2024-11-20 07:24:55.172238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:30.977 { 00:27:30.977 "results": [ 00:27:30.977 { 00:27:30.977 "job": "raid_bdev1", 00:27:30.977 "core_mask": "0x1", 00:27:30.977 "workload": "randrw", 00:27:30.977 "percentage": 50, 00:27:30.977 "status": "finished", 00:27:30.977 "queue_depth": 2, 00:27:30.977 "io_size": 3145728, 00:27:30.977 "runtime": 9.589451, 00:27:30.977 "iops": 82.48647393891476, 00:27:30.977 "mibps": 247.45942181674428, 00:27:30.977 "io_failed": 0, 00:27:30.977 "io_timeout": 0, 00:27:30.977 "avg_latency_us": 16102.204050109183, 00:27:30.977 "min_latency_us": 269.96363636363634, 00:27:30.977 "max_latency_us": 129642.12363636364 00:27:30.977 } 00:27:30.977 ], 00:27:30.977 "core_count": 1 00:27:30.977 } 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq 
length 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:30.977 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:27:31.545 /dev/nbd0 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:31.545 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:31.545 1+0 records in 00:27:31.545 1+0 records out 00:27:31.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530413 s, 7.7 MB/s 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:27:31.546 07:24:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.546 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:31.805 /dev/nbd1 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd1 /proc/partitions 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:31.805 1+0 records in 00:27:31.805 1+0 records out 00:27:31.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409183 s, 10.0 MB/s 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.805 07:24:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:32.065 07:24:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.065 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:32.324 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.324 07:24:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.583 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.583 [2024-11-20 07:24:56.744143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:32.583 
[2024-11-20 07:24:56.744220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.583 [2024-11-20 07:24:56.744253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:32.583 [2024-11-20 07:24:56.744273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.583 [2024-11-20 07:24:56.747346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.583 [2024-11-20 07:24:56.747398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:32.583 [2024-11-20 07:24:56.747529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:32.584 [2024-11-20 07:24:56.747632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:32.584 [2024-11-20 07:24:56.747817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:32.584 spare 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.584 [2024-11-20 07:24:56.847981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:32.584 [2024-11-20 07:24:56.848027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:32.584 [2024-11-20 07:24:56.848495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:27:32.584 [2024-11-20 07:24:56.848790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:32.584 [2024-11-20 07:24:56.848824] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:32.584 [2024-11-20 07:24:56.849089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.584 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.842 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.842 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.842 "name": "raid_bdev1", 00:27:32.843 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:32.843 "strip_size_kb": 0, 00:27:32.843 "state": "online", 00:27:32.843 "raid_level": "raid1", 00:27:32.843 "superblock": true, 00:27:32.843 "num_base_bdevs": 2, 00:27:32.843 "num_base_bdevs_discovered": 2, 00:27:32.843 "num_base_bdevs_operational": 2, 00:27:32.843 "base_bdevs_list": [ 00:27:32.843 { 00:27:32.843 "name": "spare", 00:27:32.843 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:32.843 "is_configured": true, 00:27:32.843 "data_offset": 2048, 00:27:32.843 "data_size": 63488 00:27:32.843 }, 00:27:32.843 { 00:27:32.843 "name": "BaseBdev2", 00:27:32.843 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:32.843 "is_configured": true, 00:27:32.843 "data_offset": 2048, 00:27:32.843 "data_size": 63488 00:27:32.843 } 00:27:32.843 ] 00:27:32.843 }' 00:27:32.843 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.843 07:24:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.411 "name": "raid_bdev1", 00:27:33.411 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:33.411 "strip_size_kb": 0, 00:27:33.411 "state": "online", 00:27:33.411 "raid_level": "raid1", 00:27:33.411 "superblock": true, 00:27:33.411 "num_base_bdevs": 2, 00:27:33.411 "num_base_bdevs_discovered": 2, 00:27:33.411 "num_base_bdevs_operational": 2, 00:27:33.411 "base_bdevs_list": [ 00:27:33.411 { 00:27:33.411 "name": "spare", 00:27:33.411 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:33.411 "is_configured": true, 00:27:33.411 "data_offset": 2048, 00:27:33.411 "data_size": 63488 00:27:33.411 }, 00:27:33.411 { 00:27:33.411 "name": "BaseBdev2", 00:27:33.411 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:33.411 "is_configured": true, 00:27:33.411 "data_offset": 2048, 00:27:33.411 "data_size": 63488 00:27:33.411 } 00:27:33.411 ] 00:27:33.411 }' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 [2024-11-20 07:24:57.677447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.411 07:24:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.411 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.742 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.742 "name": "raid_bdev1", 00:27:33.742 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:33.742 "strip_size_kb": 0, 00:27:33.742 "state": "online", 00:27:33.742 "raid_level": "raid1", 00:27:33.742 "superblock": true, 00:27:33.742 "num_base_bdevs": 2, 00:27:33.742 "num_base_bdevs_discovered": 1, 00:27:33.742 "num_base_bdevs_operational": 1, 00:27:33.742 "base_bdevs_list": [ 00:27:33.742 { 00:27:33.742 "name": null, 00:27:33.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.742 "is_configured": false, 00:27:33.742 "data_offset": 0, 00:27:33.742 "data_size": 63488 00:27:33.742 }, 00:27:33.742 { 00:27:33.742 "name": "BaseBdev2", 00:27:33.742 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:33.742 "is_configured": true, 00:27:33.742 "data_offset": 2048, 00:27:33.742 "data_size": 63488 00:27:33.742 } 00:27:33.742 ] 00:27:33.742 }' 00:27:33.742 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.742 07:24:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.002 07:24:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:34.002 07:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.002 07:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.002 [2024-11-20 07:24:58.189716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:34.002 [2024-11-20 07:24:58.190011] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:34.002 [2024-11-20 07:24:58.190042] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:34.002 [2024-11-20 07:24:58.190089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:34.002 [2024-11-20 07:24:58.206041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:27:34.002 07:24:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.002 07:24:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:34.002 [2024-11-20 07:24:58.208623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.939 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:35.197 "name": "raid_bdev1", 00:27:35.197 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:35.197 "strip_size_kb": 0, 00:27:35.197 "state": "online", 00:27:35.197 "raid_level": "raid1", 00:27:35.197 "superblock": true, 00:27:35.197 "num_base_bdevs": 2, 00:27:35.197 "num_base_bdevs_discovered": 2, 00:27:35.197 "num_base_bdevs_operational": 2, 00:27:35.197 "process": { 00:27:35.197 "type": "rebuild", 00:27:35.197 "target": "spare", 00:27:35.197 "progress": { 00:27:35.197 "blocks": 20480, 00:27:35.197 "percent": 32 00:27:35.197 } 00:27:35.197 }, 00:27:35.197 "base_bdevs_list": [ 00:27:35.197 { 00:27:35.197 "name": "spare", 00:27:35.197 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:35.197 "is_configured": true, 00:27:35.197 "data_offset": 2048, 00:27:35.197 "data_size": 63488 00:27:35.197 }, 00:27:35.197 { 00:27:35.197 "name": "BaseBdev2", 00:27:35.197 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:35.197 "is_configured": true, 00:27:35.197 "data_offset": 2048, 00:27:35.197 "data_size": 63488 00:27:35.197 } 00:27:35.197 ] 00:27:35.197 }' 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.197 [2024-11-20 07:24:59.373872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.197 [2024-11-20 07:24:59.417744] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:35.197 [2024-11-20 07:24:59.417833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.197 [2024-11-20 07:24:59.417870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.197 [2024-11-20 07:24:59.417882] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.197 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.456 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.456 "name": "raid_bdev1", 00:27:35.456 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:35.456 "strip_size_kb": 0, 00:27:35.456 "state": "online", 00:27:35.456 "raid_level": "raid1", 00:27:35.456 "superblock": true, 00:27:35.456 "num_base_bdevs": 2, 00:27:35.456 "num_base_bdevs_discovered": 1, 00:27:35.456 "num_base_bdevs_operational": 1, 00:27:35.456 "base_bdevs_list": [ 00:27:35.456 { 00:27:35.456 "name": null, 00:27:35.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.456 "is_configured": false, 00:27:35.456 "data_offset": 0, 00:27:35.456 "data_size": 63488 00:27:35.456 }, 00:27:35.456 { 00:27:35.456 "name": "BaseBdev2", 00:27:35.456 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:35.456 "is_configured": true, 00:27:35.456 "data_offset": 2048, 00:27:35.456 "data_size": 63488 00:27:35.456 } 00:27:35.456 ] 00:27:35.456 }' 00:27:35.456 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.456 07:24:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.715 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:35.715 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.715 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.715 [2024-11-20 07:24:59.981015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.715 [2024-11-20 07:24:59.981114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.715 [2024-11-20 07:24:59.981153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:35.715 [2024-11-20 07:24:59.981185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.715 [2024-11-20 07:24:59.981818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.715 [2024-11-20 07:24:59.981854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.715 [2024-11-20 07:24:59.981984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:35.715 [2024-11-20 07:24:59.982014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:35.715 [2024-11-20 07:24:59.982033] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:35.715 [2024-11-20 07:24:59.982075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:35.715 [2024-11-20 07:24:59.998309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:27:35.715 spare 00:27:35.715 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.715 07:24:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:35.715 [2024-11-20 07:25:00.000813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.091 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:37.091 "name": "raid_bdev1", 00:27:37.091 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:37.091 "strip_size_kb": 0, 00:27:37.091 
"state": "online", 00:27:37.091 "raid_level": "raid1", 00:27:37.091 "superblock": true, 00:27:37.091 "num_base_bdevs": 2, 00:27:37.091 "num_base_bdevs_discovered": 2, 00:27:37.091 "num_base_bdevs_operational": 2, 00:27:37.091 "process": { 00:27:37.091 "type": "rebuild", 00:27:37.091 "target": "spare", 00:27:37.091 "progress": { 00:27:37.092 "blocks": 20480, 00:27:37.092 "percent": 32 00:27:37.092 } 00:27:37.092 }, 00:27:37.092 "base_bdevs_list": [ 00:27:37.092 { 00:27:37.092 "name": "spare", 00:27:37.092 "uuid": "1b2a0acf-95bb-5583-b8f3-b1b9688945f6", 00:27:37.092 "is_configured": true, 00:27:37.092 "data_offset": 2048, 00:27:37.092 "data_size": 63488 00:27:37.092 }, 00:27:37.092 { 00:27:37.092 "name": "BaseBdev2", 00:27:37.092 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:37.092 "is_configured": true, 00:27:37.092 "data_offset": 2048, 00:27:37.092 "data_size": 63488 00:27:37.092 } 00:27:37.092 ] 00:27:37.092 }' 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.092 [2024-11-20 07:25:01.174178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:37.092 [2024-11-20 07:25:01.210068] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:27:37.092 [2024-11-20 07:25:01.210165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.092 [2024-11-20 07:25:01.210189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:37.092 [2024-11-20 07:25:01.210203] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.092 "name": "raid_bdev1", 00:27:37.092 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:37.092 "strip_size_kb": 0, 00:27:37.092 "state": "online", 00:27:37.092 "raid_level": "raid1", 00:27:37.092 "superblock": true, 00:27:37.092 "num_base_bdevs": 2, 00:27:37.092 "num_base_bdevs_discovered": 1, 00:27:37.092 "num_base_bdevs_operational": 1, 00:27:37.092 "base_bdevs_list": [ 00:27:37.092 { 00:27:37.092 "name": null, 00:27:37.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.092 "is_configured": false, 00:27:37.092 "data_offset": 0, 00:27:37.092 "data_size": 63488 00:27:37.092 }, 00:27:37.092 { 00:27:37.092 "name": "BaseBdev2", 00:27:37.092 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:37.092 "is_configured": true, 00:27:37.092 "data_offset": 2048, 00:27:37.092 "data_size": 63488 00:27:37.092 } 00:27:37.092 ] 00:27:37.092 }' 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.092 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:37.659 "name": "raid_bdev1", 00:27:37.659 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:37.659 "strip_size_kb": 0, 00:27:37.659 "state": "online", 00:27:37.659 "raid_level": "raid1", 00:27:37.659 "superblock": true, 00:27:37.659 "num_base_bdevs": 2, 00:27:37.659 "num_base_bdevs_discovered": 1, 00:27:37.659 "num_base_bdevs_operational": 1, 00:27:37.659 "base_bdevs_list": [ 00:27:37.659 { 00:27:37.659 "name": null, 00:27:37.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.659 "is_configured": false, 00:27:37.659 "data_offset": 0, 00:27:37.659 "data_size": 63488 00:27:37.659 }, 00:27:37.659 { 00:27:37.659 "name": "BaseBdev2", 00:27:37.659 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:37.659 "is_configured": true, 00:27:37.659 "data_offset": 2048, 00:27:37.659 "data_size": 63488 00:27:37.659 } 00:27:37.659 ] 00:27:37.659 }' 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.659 [2024-11-20 07:25:01.941996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:37.659 [2024-11-20 07:25:01.942090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.659 [2024-11-20 07:25:01.942122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:37.659 [2024-11-20 07:25:01.942142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.659 [2024-11-20 07:25:01.942774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.659 [2024-11-20 07:25:01.942816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:37.659 [2024-11-20 07:25:01.942952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:37.659 [2024-11-20 07:25:01.942980] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:37.659 [2024-11-20 07:25:01.942992] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:37.659 [2024-11-20 07:25:01.943014] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:37.659 BaseBdev1 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.659 07:25:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:39.037 07:25:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.037 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.037 "name": "raid_bdev1", 00:27:39.037 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:39.037 "strip_size_kb": 0, 00:27:39.037 "state": "online", 00:27:39.037 "raid_level": "raid1", 00:27:39.037 "superblock": true, 00:27:39.037 "num_base_bdevs": 2, 00:27:39.037 "num_base_bdevs_discovered": 1, 00:27:39.037 "num_base_bdevs_operational": 1, 00:27:39.037 "base_bdevs_list": [ 00:27:39.037 { 00:27:39.037 "name": null, 00:27:39.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.037 "is_configured": false, 00:27:39.037 "data_offset": 0, 00:27:39.037 "data_size": 63488 00:27:39.037 }, 00:27:39.037 { 00:27:39.037 "name": "BaseBdev2", 00:27:39.037 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:39.037 "is_configured": true, 00:27:39.037 "data_offset": 2048, 00:27:39.037 "data_size": 63488 00:27:39.037 } 00:27:39.037 ] 00:27:39.037 }' 00:27:39.037 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.037 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:39.363 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:39.363 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:39.363 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:39.363 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:39.363 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:39.364 "name": "raid_bdev1", 00:27:39.364 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:39.364 "strip_size_kb": 0, 00:27:39.364 "state": "online", 00:27:39.364 "raid_level": "raid1", 00:27:39.364 "superblock": true, 00:27:39.364 "num_base_bdevs": 2, 00:27:39.364 "num_base_bdevs_discovered": 1, 00:27:39.364 "num_base_bdevs_operational": 1, 00:27:39.364 "base_bdevs_list": [ 00:27:39.364 { 00:27:39.364 "name": null, 00:27:39.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.364 "is_configured": false, 00:27:39.364 "data_offset": 0, 00:27:39.364 "data_size": 63488 00:27:39.364 }, 00:27:39.364 { 00:27:39.364 "name": "BaseBdev2", 00:27:39.364 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:39.364 "is_configured": true, 00:27:39.364 "data_offset": 2048, 00:27:39.364 "data_size": 63488 00:27:39.364 } 00:27:39.364 ] 00:27:39.364 }' 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:39.364 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:39.622 [2024-11-20 07:25:03.694729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:39.622 [2024-11-20 07:25:03.694966] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:39.622 [2024-11-20 07:25:03.694999] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:39.622 request: 00:27:39.622 { 00:27:39.622 "base_bdev": "BaseBdev1", 00:27:39.622 "raid_bdev": "raid_bdev1", 00:27:39.622 "method": "bdev_raid_add_base_bdev", 00:27:39.622 "req_id": 1 00:27:39.622 } 00:27:39.622 Got JSON-RPC error response 00:27:39.622 response: 00:27:39.622 { 00:27:39.622 "code": -22, 00:27:39.622 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:39.622 } 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:39.622 07:25:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.558 "name": "raid_bdev1", 00:27:40.558 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:40.558 "strip_size_kb": 0, 00:27:40.558 "state": "online", 00:27:40.558 "raid_level": "raid1", 00:27:40.558 "superblock": true, 00:27:40.558 "num_base_bdevs": 2, 00:27:40.558 "num_base_bdevs_discovered": 1, 00:27:40.558 "num_base_bdevs_operational": 1, 00:27:40.558 "base_bdevs_list": [ 00:27:40.558 { 00:27:40.558 "name": null, 00:27:40.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.558 "is_configured": false, 00:27:40.558 "data_offset": 0, 00:27:40.558 "data_size": 63488 00:27:40.558 }, 00:27:40.558 { 00:27:40.558 "name": "BaseBdev2", 00:27:40.558 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:40.558 "is_configured": true, 00:27:40.558 "data_offset": 2048, 00:27:40.558 "data_size": 63488 00:27:40.558 } 00:27:40.558 ] 00:27:40.558 }' 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.558 07:25:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:41.125 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:41.125 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:41.125 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:41.126 07:25:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:41.126 "name": "raid_bdev1", 00:27:41.126 "uuid": "064b3626-5110-472c-aaa0-4691d04bc077", 00:27:41.126 "strip_size_kb": 0, 00:27:41.126 "state": "online", 00:27:41.126 "raid_level": "raid1", 00:27:41.126 "superblock": true, 00:27:41.126 "num_base_bdevs": 2, 00:27:41.126 "num_base_bdevs_discovered": 1, 00:27:41.126 "num_base_bdevs_operational": 1, 00:27:41.126 "base_bdevs_list": [ 00:27:41.126 { 00:27:41.126 "name": null, 00:27:41.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.126 "is_configured": false, 00:27:41.126 "data_offset": 0, 00:27:41.126 "data_size": 63488 00:27:41.126 }, 00:27:41.126 { 00:27:41.126 "name": "BaseBdev2", 00:27:41.126 "uuid": "a0c9f18f-80a4-5944-aa3f-95a460d62c99", 00:27:41.126 "is_configured": true, 00:27:41.126 "data_offset": 2048, 00:27:41.126 "data_size": 63488 00:27:41.126 } 00:27:41.126 ] 00:27:41.126 }' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:41.126 07:25:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77289 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77289 ']' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77289 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.126 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77289 00:27:41.384 killing process with pid 77289 00:27:41.384 Received shutdown signal, test time was about 19.877964 seconds 00:27:41.384 00:27:41.384 Latency(us) 00:27:41.384 [2024-11-20T07:25:05.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.384 [2024-11-20T07:25:05.673Z] =================================================================================================================== 00:27:41.384 [2024-11-20T07:25:05.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.384 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.384 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.384 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77289' 00:27:41.384 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77289 00:27:41.384 07:25:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77289 00:27:41.384 [2024-11-20 07:25:05.440708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:41.384 [2024-11-20 07:25:05.440888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:41.384 [2024-11-20 07:25:05.440986] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:41.384 [2024-11-20 07:25:05.441002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:41.384 [2024-11-20 07:25:05.658031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:27:42.761 00:27:42.761 real 0m23.074s 00:27:42.761 user 0m31.278s 00:27:42.761 sys 0m2.079s 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:42.761 ************************************ 00:27:42.761 END TEST raid_rebuild_test_sb_io 00:27:42.761 ************************************ 00:27:42.761 07:25:06 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:27:42.761 07:25:06 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:27:42.761 07:25:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:42.761 07:25:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.761 07:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:42.761 ************************************ 00:27:42.761 START TEST raid_rebuild_test 00:27:42.761 ************************************ 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:42.761 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:27:42.762 07:25:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78008 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78008 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78008 ']' 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.762 07:25:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.762 [2024-11-20 07:25:06.839050] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:27:42.762 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:42.762 Zero copy mechanism will not be used. 00:27:42.762 [2024-11-20 07:25:06.839239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78008 ] 00:27:42.762 [2024-11-20 07:25:07.023932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.020 [2024-11-20 07:25:07.149688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.279 [2024-11-20 07:25:07.358668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.279 [2024-11-20 07:25:07.358714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.538 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 BaseBdev1_malloc 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:27:43.796 [2024-11-20 07:25:07.873607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:43.796 [2024-11-20 07:25:07.873702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.796 [2024-11-20 07:25:07.873752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:43.796 [2024-11-20 07:25:07.873771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.796 [2024-11-20 07:25:07.876542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.796 [2024-11-20 07:25:07.876610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:43.796 BaseBdev1 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 BaseBdev2_malloc 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 [2024-11-20 07:25:07.932176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:43.796 [2024-11-20 07:25:07.932258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:27:43.796 [2024-11-20 07:25:07.932311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:43.796 [2024-11-20 07:25:07.932332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.796 [2024-11-20 07:25:07.935392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.796 [2024-11-20 07:25:07.935450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:43.796 BaseBdev2 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 BaseBdev3_malloc 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 [2024-11-20 07:25:07.995804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:43.796 [2024-11-20 07:25:07.995874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.796 [2024-11-20 07:25:07.995908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:43.796 [2024-11-20 07:25:07.995925] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.796 [2024-11-20 07:25:07.998947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.796 [2024-11-20 07:25:07.998994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:43.796 BaseBdev3 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 BaseBdev4_malloc 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.796 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.796 [2024-11-20 07:25:08.054515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:43.797 [2024-11-20 07:25:08.054597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.797 [2024-11-20 07:25:08.054630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:43.797 [2024-11-20 07:25:08.054650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.797 [2024-11-20 07:25:08.057781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.797 [2024-11-20 07:25:08.057840] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:43.797 BaseBdev4 00:27:43.797 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.797 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:43.797 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.797 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 spare_malloc 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 spare_delay 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 [2024-11-20 07:25:08.125133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:44.055 [2024-11-20 07:25:08.125220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.055 [2024-11-20 07:25:08.125252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:44.055 [2024-11-20 07:25:08.125270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.055 [2024-11-20 
07:25:08.128330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.055 [2024-11-20 07:25:08.128388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:44.055 spare 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 [2024-11-20 07:25:08.137348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:44.055 [2024-11-20 07:25:08.140053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:44.055 [2024-11-20 07:25:08.140160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:44.055 [2024-11-20 07:25:08.140238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:44.055 [2024-11-20 07:25:08.140384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:44.055 [2024-11-20 07:25:08.140406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:44.055 [2024-11-20 07:25:08.140781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:44.055 [2024-11-20 07:25:08.141037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:44.055 [2024-11-20 07:25:08.141056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:44.055 [2024-11-20 07:25:08.141316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:44.055 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.056 "name": "raid_bdev1", 00:27:44.056 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:44.056 "strip_size_kb": 0, 00:27:44.056 "state": "online", 00:27:44.056 "raid_level": 
"raid1", 00:27:44.056 "superblock": false, 00:27:44.056 "num_base_bdevs": 4, 00:27:44.056 "num_base_bdevs_discovered": 4, 00:27:44.056 "num_base_bdevs_operational": 4, 00:27:44.056 "base_bdevs_list": [ 00:27:44.056 { 00:27:44.056 "name": "BaseBdev1", 00:27:44.056 "uuid": "3e738a3e-124f-59ec-8790-59abda1120bb", 00:27:44.056 "is_configured": true, 00:27:44.056 "data_offset": 0, 00:27:44.056 "data_size": 65536 00:27:44.056 }, 00:27:44.056 { 00:27:44.056 "name": "BaseBdev2", 00:27:44.056 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:44.056 "is_configured": true, 00:27:44.056 "data_offset": 0, 00:27:44.056 "data_size": 65536 00:27:44.056 }, 00:27:44.056 { 00:27:44.056 "name": "BaseBdev3", 00:27:44.056 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:44.056 "is_configured": true, 00:27:44.056 "data_offset": 0, 00:27:44.056 "data_size": 65536 00:27:44.056 }, 00:27:44.056 { 00:27:44.056 "name": "BaseBdev4", 00:27:44.056 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:44.056 "is_configured": true, 00:27:44.056 "data_offset": 0, 00:27:44.056 "data_size": 65536 00:27:44.056 } 00:27:44.056 ] 00:27:44.056 }' 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.056 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:44.623 [2024-11-20 07:25:08.665918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.623 07:25:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:44.623 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:44.624 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:44.624 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:44.624 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:44.624 07:25:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:44.624 07:25:08 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:44.882 [2024-11-20 07:25:09.073693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:44.882 /dev/nbd0 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.882 1+0 records in 00:27:44.882 1+0 records out 00:27:44.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321559 s, 12.7 MB/s 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:44.882 07:25:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:52.999 65536+0 records in 00:27:52.999 65536+0 records out 00:27:52.999 33554432 bytes (34 MB, 32 MiB) copied, 7.97414 s, 4.2 MB/s 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.999 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:53.269 [2024-11-20 07:25:17.380883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:53.269 
07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.269 [2024-11-20 07:25:17.413370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.269 07:25:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.269 "name": "raid_bdev1", 00:27:53.269 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:53.269 "strip_size_kb": 0, 00:27:53.269 "state": "online", 00:27:53.269 "raid_level": "raid1", 00:27:53.269 "superblock": false, 00:27:53.269 "num_base_bdevs": 4, 00:27:53.269 "num_base_bdevs_discovered": 3, 00:27:53.269 "num_base_bdevs_operational": 3, 00:27:53.269 "base_bdevs_list": [ 00:27:53.269 { 00:27:53.269 "name": null, 00:27:53.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.269 "is_configured": false, 00:27:53.269 "data_offset": 0, 00:27:53.269 "data_size": 65536 00:27:53.269 }, 00:27:53.269 { 00:27:53.269 "name": "BaseBdev2", 00:27:53.269 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:53.269 "is_configured": true, 00:27:53.269 "data_offset": 0, 00:27:53.269 "data_size": 65536 00:27:53.269 }, 00:27:53.269 { 00:27:53.269 "name": "BaseBdev3", 00:27:53.269 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:53.269 "is_configured": true, 00:27:53.269 "data_offset": 0, 00:27:53.269 "data_size": 65536 00:27:53.269 }, 00:27:53.269 { 00:27:53.269 "name": "BaseBdev4", 00:27:53.269 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:53.269 
"is_configured": true, 00:27:53.269 "data_offset": 0, 00:27:53.269 "data_size": 65536 00:27:53.269 } 00:27:53.269 ] 00:27:53.269 }' 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.269 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.893 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:53.893 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.893 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.893 [2024-11-20 07:25:17.941571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:53.893 [2024-11-20 07:25:17.955527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:27:53.893 07:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.893 07:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:53.893 [2024-11-20 07:25:17.958437] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.831 
07:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.831 07:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.831 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:54.831 "name": "raid_bdev1", 00:27:54.831 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:54.831 "strip_size_kb": 0, 00:27:54.831 "state": "online", 00:27:54.831 "raid_level": "raid1", 00:27:54.831 "superblock": false, 00:27:54.831 "num_base_bdevs": 4, 00:27:54.831 "num_base_bdevs_discovered": 4, 00:27:54.831 "num_base_bdevs_operational": 4, 00:27:54.831 "process": { 00:27:54.831 "type": "rebuild", 00:27:54.831 "target": "spare", 00:27:54.832 "progress": { 00:27:54.832 "blocks": 20480, 00:27:54.832 "percent": 31 00:27:54.832 } 00:27:54.832 }, 00:27:54.832 "base_bdevs_list": [ 00:27:54.832 { 00:27:54.832 "name": "spare", 00:27:54.832 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev2", 00:27:54.832 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev3", 00:27:54.832 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 }, 00:27:54.832 { 00:27:54.832 "name": "BaseBdev4", 00:27:54.832 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:54.832 "is_configured": true, 00:27:54.832 "data_offset": 0, 00:27:54.832 "data_size": 65536 00:27:54.832 } 00:27:54.832 ] 00:27:54.832 }' 00:27:54.832 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:27:54.832 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:54.832 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 [2024-11-20 07:25:19.131784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.103 [2024-11-20 07:25:19.167799] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:55.103 [2024-11-20 07:25:19.167880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.103 [2024-11-20 07:25:19.167905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.103 [2024-11-20 07:25:19.167920] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:55.103 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:55.104 07:25:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.104 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.104 "name": "raid_bdev1", 00:27:55.104 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:55.104 "strip_size_kb": 0, 00:27:55.104 "state": "online", 00:27:55.104 "raid_level": "raid1", 00:27:55.104 "superblock": false, 00:27:55.104 "num_base_bdevs": 4, 00:27:55.104 "num_base_bdevs_discovered": 3, 00:27:55.104 "num_base_bdevs_operational": 3, 00:27:55.104 "base_bdevs_list": [ 00:27:55.104 { 00:27:55.104 "name": null, 00:27:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.104 "is_configured": false, 00:27:55.104 "data_offset": 0, 00:27:55.104 "data_size": 65536 00:27:55.104 }, 00:27:55.104 { 00:27:55.104 "name": "BaseBdev2", 00:27:55.104 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:55.104 "is_configured": true, 00:27:55.105 "data_offset": 0, 00:27:55.105 "data_size": 65536 00:27:55.105 }, 00:27:55.105 { 00:27:55.105 "name": 
"BaseBdev3", 00:27:55.105 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:55.105 "is_configured": true, 00:27:55.105 "data_offset": 0, 00:27:55.105 "data_size": 65536 00:27:55.105 }, 00:27:55.105 { 00:27:55.105 "name": "BaseBdev4", 00:27:55.105 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:55.105 "is_configured": true, 00:27:55.105 "data_offset": 0, 00:27:55.105 "data_size": 65536 00:27:55.105 } 00:27:55.105 ] 00:27:55.105 }' 00:27:55.105 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.105 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:55.679 "name": "raid_bdev1", 00:27:55.679 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:55.679 "strip_size_kb": 0, 00:27:55.679 "state": "online", 00:27:55.679 "raid_level": 
"raid1", 00:27:55.679 "superblock": false, 00:27:55.679 "num_base_bdevs": 4, 00:27:55.679 "num_base_bdevs_discovered": 3, 00:27:55.679 "num_base_bdevs_operational": 3, 00:27:55.679 "base_bdevs_list": [ 00:27:55.679 { 00:27:55.679 "name": null, 00:27:55.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.679 "is_configured": false, 00:27:55.679 "data_offset": 0, 00:27:55.679 "data_size": 65536 00:27:55.679 }, 00:27:55.679 { 00:27:55.679 "name": "BaseBdev2", 00:27:55.679 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:55.679 "is_configured": true, 00:27:55.679 "data_offset": 0, 00:27:55.679 "data_size": 65536 00:27:55.679 }, 00:27:55.679 { 00:27:55.679 "name": "BaseBdev3", 00:27:55.679 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:55.679 "is_configured": true, 00:27:55.679 "data_offset": 0, 00:27:55.679 "data_size": 65536 00:27:55.679 }, 00:27:55.679 { 00:27:55.679 "name": "BaseBdev4", 00:27:55.679 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:55.679 "is_configured": true, 00:27:55.679 "data_offset": 0, 00:27:55.679 "data_size": 65536 00:27:55.679 } 00:27:55.679 ] 00:27:55.679 }' 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.679 [2024-11-20 07:25:19.883951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:27:55.679 [2024-11-20 07:25:19.897647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.679 07:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:55.679 [2024-11-20 07:25:19.900511] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:57.055 "name": "raid_bdev1", 00:27:57.055 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:57.055 "strip_size_kb": 0, 00:27:57.055 "state": "online", 00:27:57.055 "raid_level": "raid1", 00:27:57.055 "superblock": false, 00:27:57.055 "num_base_bdevs": 4, 00:27:57.055 "num_base_bdevs_discovered": 4, 00:27:57.055 "num_base_bdevs_operational": 4, 
00:27:57.055 "process": { 00:27:57.055 "type": "rebuild", 00:27:57.055 "target": "spare", 00:27:57.055 "progress": { 00:27:57.055 "blocks": 20480, 00:27:57.055 "percent": 31 00:27:57.055 } 00:27:57.055 }, 00:27:57.055 "base_bdevs_list": [ 00:27:57.055 { 00:27:57.055 "name": "spare", 00:27:57.055 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:57.055 "is_configured": true, 00:27:57.055 "data_offset": 0, 00:27:57.055 "data_size": 65536 00:27:57.055 }, 00:27:57.055 { 00:27:57.055 "name": "BaseBdev2", 00:27:57.055 "uuid": "663ef64c-66f6-5e57-b63b-db153046b750", 00:27:57.055 "is_configured": true, 00:27:57.055 "data_offset": 0, 00:27:57.055 "data_size": 65536 00:27:57.055 }, 00:27:57.055 { 00:27:57.055 "name": "BaseBdev3", 00:27:57.055 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:57.055 "is_configured": true, 00:27:57.055 "data_offset": 0, 00:27:57.055 "data_size": 65536 00:27:57.055 }, 00:27:57.055 { 00:27:57.055 "name": "BaseBdev4", 00:27:57.055 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:57.055 "is_configured": true, 00:27:57.055 "data_offset": 0, 00:27:57.055 "data_size": 65536 00:27:57.055 } 00:27:57.055 ] 00:27:57.055 }' 00:27:57.055 07:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.055 [2024-11-20 07:25:21.077838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:57.055 [2024-11-20 07:25:21.109742] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:57.055 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:57.055 "name": "raid_bdev1", 00:27:57.055 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:57.055 "strip_size_kb": 0, 00:27:57.055 "state": "online", 00:27:57.055 "raid_level": "raid1", 00:27:57.055 "superblock": false, 00:27:57.055 "num_base_bdevs": 4, 00:27:57.056 "num_base_bdevs_discovered": 3, 00:27:57.056 "num_base_bdevs_operational": 3, 00:27:57.056 "process": { 00:27:57.056 "type": "rebuild", 00:27:57.056 "target": "spare", 00:27:57.056 "progress": { 00:27:57.056 "blocks": 24576, 00:27:57.056 "percent": 37 00:27:57.056 } 00:27:57.056 }, 00:27:57.056 "base_bdevs_list": [ 00:27:57.056 { 00:27:57.056 "name": "spare", 00:27:57.056 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": null, 00:27:57.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.056 "is_configured": false, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": "BaseBdev3", 00:27:57.056 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": "BaseBdev4", 00:27:57.056 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 } 00:27:57.056 ] 00:27:57.056 }' 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:57.056 07:25:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=485 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:57.056 "name": "raid_bdev1", 00:27:57.056 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:57.056 "strip_size_kb": 0, 00:27:57.056 "state": "online", 00:27:57.056 "raid_level": "raid1", 00:27:57.056 "superblock": false, 00:27:57.056 "num_base_bdevs": 4, 00:27:57.056 "num_base_bdevs_discovered": 3, 00:27:57.056 "num_base_bdevs_operational": 3, 00:27:57.056 "process": { 00:27:57.056 "type": "rebuild", 00:27:57.056 "target": "spare", 00:27:57.056 "progress": { 00:27:57.056 "blocks": 26624, 00:27:57.056 "percent": 40 
00:27:57.056 } 00:27:57.056 }, 00:27:57.056 "base_bdevs_list": [ 00:27:57.056 { 00:27:57.056 "name": "spare", 00:27:57.056 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": null, 00:27:57.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.056 "is_configured": false, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": "BaseBdev3", 00:27:57.056 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 }, 00:27:57.056 { 00:27:57.056 "name": "BaseBdev4", 00:27:57.056 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:57.056 "is_configured": true, 00:27:57.056 "data_offset": 0, 00:27:57.056 "data_size": 65536 00:27:57.056 } 00:27:57.056 ] 00:27:57.056 }' 00:27:57.056 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:57.315 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.315 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:57.315 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.315 07:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:58.252 07:25:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:58.252 "name": "raid_bdev1", 00:27:58.252 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:58.252 "strip_size_kb": 0, 00:27:58.252 "state": "online", 00:27:58.252 "raid_level": "raid1", 00:27:58.252 "superblock": false, 00:27:58.252 "num_base_bdevs": 4, 00:27:58.252 "num_base_bdevs_discovered": 3, 00:27:58.252 "num_base_bdevs_operational": 3, 00:27:58.252 "process": { 00:27:58.252 "type": "rebuild", 00:27:58.252 "target": "spare", 00:27:58.252 "progress": { 00:27:58.252 "blocks": 51200, 00:27:58.252 "percent": 78 00:27:58.252 } 00:27:58.252 }, 00:27:58.252 "base_bdevs_list": [ 00:27:58.252 { 00:27:58.252 "name": "spare", 00:27:58.252 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:58.252 "is_configured": true, 00:27:58.252 "data_offset": 0, 00:27:58.252 "data_size": 65536 00:27:58.252 }, 00:27:58.252 { 00:27:58.252 "name": null, 00:27:58.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.252 "is_configured": false, 00:27:58.252 "data_offset": 0, 00:27:58.252 "data_size": 65536 00:27:58.252 }, 00:27:58.252 { 00:27:58.252 "name": "BaseBdev3", 00:27:58.252 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:58.252 "is_configured": true, 
00:27:58.252 "data_offset": 0, 00:27:58.252 "data_size": 65536 00:27:58.252 }, 00:27:58.252 { 00:27:58.252 "name": "BaseBdev4", 00:27:58.252 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:58.252 "is_configured": true, 00:27:58.252 "data_offset": 0, 00:27:58.252 "data_size": 65536 00:27:58.252 } 00:27:58.252 ] 00:27:58.252 }' 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.252 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:58.511 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.511 07:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:59.079 [2024-11-20 07:25:23.124758] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:59.079 [2024-11-20 07:25:23.125117] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:59.079 [2024-11-20 07:25:23.125198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.338 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:59.597 "name": "raid_bdev1", 00:27:59.597 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:59.597 "strip_size_kb": 0, 00:27:59.597 "state": "online", 00:27:59.597 "raid_level": "raid1", 00:27:59.597 "superblock": false, 00:27:59.597 "num_base_bdevs": 4, 00:27:59.597 "num_base_bdevs_discovered": 3, 00:27:59.597 "num_base_bdevs_operational": 3, 00:27:59.597 "base_bdevs_list": [ 00:27:59.597 { 00:27:59.597 "name": "spare", 00:27:59.597 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": null, 00:27:59.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.597 "is_configured": false, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": "BaseBdev3", 00:27:59.597 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": "BaseBdev4", 00:27:59.597 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 } 00:27:59.597 ] 00:27:59.597 }' 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:59.597 07:25:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:59.597 "name": "raid_bdev1", 00:27:59.597 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:59.597 "strip_size_kb": 0, 00:27:59.597 "state": "online", 00:27:59.597 "raid_level": "raid1", 00:27:59.597 "superblock": false, 00:27:59.597 "num_base_bdevs": 4, 00:27:59.597 "num_base_bdevs_discovered": 3, 00:27:59.597 "num_base_bdevs_operational": 3, 00:27:59.597 "base_bdevs_list": [ 00:27:59.597 { 00:27:59.597 "name": "spare", 
00:27:59.597 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": null, 00:27:59.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.597 "is_configured": false, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": "BaseBdev3", 00:27:59.597 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 }, 00:27:59.597 { 00:27:59.597 "name": "BaseBdev4", 00:27:59.597 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:59.597 "is_configured": true, 00:27:59.597 "data_offset": 0, 00:27:59.597 "data_size": 65536 00:27:59.597 } 00:27:59.597 ] 00:27:59.597 }' 00:27:59.597 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:59.598 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:59.598 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:59.856 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:59.856 07:25:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.857 "name": "raid_bdev1", 00:27:59.857 "uuid": "c9c80b06-e8b0-49d1-af37-558494104e68", 00:27:59.857 "strip_size_kb": 0, 00:27:59.857 "state": "online", 00:27:59.857 "raid_level": "raid1", 00:27:59.857 "superblock": false, 00:27:59.857 "num_base_bdevs": 4, 00:27:59.857 "num_base_bdevs_discovered": 3, 00:27:59.857 "num_base_bdevs_operational": 3, 00:27:59.857 "base_bdevs_list": [ 00:27:59.857 { 00:27:59.857 "name": "spare", 00:27:59.857 "uuid": "2a8c6a3a-020a-58fc-8a97-c46f601623a0", 00:27:59.857 "is_configured": true, 00:27:59.857 "data_offset": 0, 00:27:59.857 "data_size": 65536 00:27:59.857 }, 00:27:59.857 { 00:27:59.857 "name": null, 00:27:59.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.857 "is_configured": false, 00:27:59.857 "data_offset": 0, 00:27:59.857 "data_size": 65536 00:27:59.857 }, 00:27:59.857 { 00:27:59.857 "name": "BaseBdev3", 00:27:59.857 "uuid": "c9385f23-dcce-5c39-9cc7-40990f920b3e", 00:27:59.857 "is_configured": true, 
00:27:59.857 "data_offset": 0, 00:27:59.857 "data_size": 65536 00:27:59.857 }, 00:27:59.857 { 00:27:59.857 "name": "BaseBdev4", 00:27:59.857 "uuid": "676ac393-9195-5467-82c9-e9c6f3b82f15", 00:27:59.857 "is_configured": true, 00:27:59.857 "data_offset": 0, 00:27:59.857 "data_size": 65536 00:27:59.857 } 00:27:59.857 ] 00:27:59.857 }' 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.857 07:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.472 [2024-11-20 07:25:24.451404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:00.472 [2024-11-20 07:25:24.451442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:00.472 [2024-11-20 07:25:24.451536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:00.472 [2024-11-20 07:25:24.451673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:00.472 [2024-11-20 07:25:24.451692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.472 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:00.731 /dev/nbd0 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:28:00.731 07:25:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.731 1+0 records in 00:28:00.731 1+0 records out 00:28:00.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366871 s, 11.2 MB/s 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.731 07:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:00.990 /dev/nbd1 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:00.990 
07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.990 1+0 records in 00:28:00.990 1+0 records out 00:28:00.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354261 s, 11.6 MB/s 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.990 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:28:00.991 07:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:01.249 07:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:01.249 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:01.249 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.250 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:01.250 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:01.250 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.250 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:01.508 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:01.508 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:01.508 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:01.508 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.509 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:01.767 
07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78008 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78008 ']' 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78008 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78008 00:28:01.767 killing process with pid 78008 00:28:01.767 Received shutdown signal, test time was about 60.000000 seconds 00:28:01.767 00:28:01.767 Latency(us) 00:28:01.767 [2024-11-20T07:25:26.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.767 [2024-11-20T07:25:26.056Z] =================================================================================================================== 00:28:01.767 [2024-11-20T07:25:26.056Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78008' 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78008 00:28:01.767 [2024-11-20 07:25:25.954850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:01.767 07:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78008 00:28:02.334 [2024-11-20 07:25:26.350763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:03.272 ************************************ 00:28:03.272 END TEST raid_rebuild_test 00:28:03.272 ************************************ 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:28:03.272 00:28:03.272 real 0m20.586s 00:28:03.272 user 0m23.136s 00:28:03.272 sys 0m3.439s 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.272 07:25:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:28:03.272 07:25:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:03.272 07:25:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.272 07:25:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:03.272 ************************************ 00:28:03.272 START TEST raid_rebuild_test_sb 00:28:03.272 ************************************ 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:28:03.272 07:25:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78488 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:03.272 07:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78488 00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78488 ']' 00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.273 07:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:03.273 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:03.273 Zero copy mechanism will not be used. 00:28:03.273 [2024-11-20 07:25:27.488534] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:03.273 [2024-11-20 07:25:27.488744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78488 ] 00:28:03.532 [2024-11-20 07:25:27.675856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.532 [2024-11-20 07:25:27.798730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.791 [2024-11-20 07:25:27.987964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:03.791 [2024-11-20 07:25:27.988030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:04.359 07:25:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 BaseBdev1_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 [2024-11-20 07:25:28.515365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:04.359 [2024-11-20 07:25:28.515662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.359 [2024-11-20 07:25:28.515716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:04.359 [2024-11-20 07:25:28.515736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.359 [2024-11-20 07:25:28.518548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.359 [2024-11-20 07:25:28.518769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:04.359 BaseBdev1 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 
BaseBdev2_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 [2024-11-20 07:25:28.570671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:04.359 [2024-11-20 07:25:28.570769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.359 [2024-11-20 07:25:28.570794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:04.359 [2024-11-20 07:25:28.570812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.359 [2024-11-20 07:25:28.573458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.359 [2024-11-20 07:25:28.573522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:04.359 BaseBdev2 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 BaseBdev3_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.359 [2024-11-20 07:25:28.634375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:04.359 [2024-11-20 07:25:28.634484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.359 [2024-11-20 07:25:28.634516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:04.359 [2024-11-20 07:25:28.634534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.359 [2024-11-20 07:25:28.637648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.359 [2024-11-20 07:25:28.637769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:04.359 BaseBdev3 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.359 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 BaseBdev4_malloc 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 [2024-11-20 07:25:28.685759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:04.619 [2024-11-20 07:25:28.685845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.619 [2024-11-20 07:25:28.685876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:04.619 [2024-11-20 07:25:28.685895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.619 [2024-11-20 07:25:28.688893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.619 [2024-11-20 07:25:28.688960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:04.619 BaseBdev4 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 spare_malloc 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 spare_delay 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 [2024-11-20 07:25:28.748770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:04.619 [2024-11-20 07:25:28.748857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.619 [2024-11-20 07:25:28.748896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:04.619 [2024-11-20 07:25:28.748918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.619 [2024-11-20 07:25:28.751715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.619 [2024-11-20 07:25:28.751775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:04.619 spare 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 [2024-11-20 07:25:28.756834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:04.619 [2024-11-20 07:25:28.759456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:04.619 [2024-11-20 07:25:28.759550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:04.619 [2024-11-20 07:25:28.759656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:28:04.619 [2024-11-20 07:25:28.759896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:04.619 [2024-11-20 07:25:28.759930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:04.619 [2024-11-20 07:25:28.760264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:04.619 [2024-11-20 07:25:28.760501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:04.619 [2024-11-20 07:25:28.760522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:04.619 [2024-11-20 07:25:28.760813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.619 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.619 "name": "raid_bdev1", 00:28:04.619 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:04.619 "strip_size_kb": 0, 00:28:04.619 "state": "online", 00:28:04.619 "raid_level": "raid1", 00:28:04.619 "superblock": true, 00:28:04.619 "num_base_bdevs": 4, 00:28:04.619 "num_base_bdevs_discovered": 4, 00:28:04.619 "num_base_bdevs_operational": 4, 00:28:04.619 "base_bdevs_list": [ 00:28:04.619 { 00:28:04.619 "name": "BaseBdev1", 00:28:04.619 "uuid": "ba9962a4-1b30-50f7-9925-18da5b85d5ec", 00:28:04.619 "is_configured": true, 00:28:04.619 "data_offset": 2048, 00:28:04.619 "data_size": 63488 00:28:04.619 }, 00:28:04.619 { 00:28:04.619 "name": "BaseBdev2", 00:28:04.619 "uuid": "9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:04.619 "is_configured": true, 00:28:04.619 "data_offset": 2048, 00:28:04.619 "data_size": 63488 00:28:04.619 }, 00:28:04.619 { 00:28:04.619 "name": "BaseBdev3", 00:28:04.619 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:04.620 "is_configured": true, 00:28:04.620 "data_offset": 2048, 00:28:04.620 "data_size": 63488 00:28:04.620 }, 00:28:04.620 { 00:28:04.620 "name": "BaseBdev4", 00:28:04.620 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:04.620 "is_configured": true, 00:28:04.620 "data_offset": 2048, 00:28:04.620 "data_size": 63488 00:28:04.620 } 00:28:04.620 ] 00:28:04.620 }' 
00:28:04.620 07:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.620 07:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.187 [2024-11-20 07:25:29.273443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:05.187 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:05.445 [2024-11-20 07:25:29.609204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:05.445 /dev/nbd0 00:28:05.445 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:05.445 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:05.445 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:05.445 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:05.445 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:05.446 1+0 records in 00:28:05.446 1+0 records out 00:28:05.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436239 s, 9.4 MB/s 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:05.446 07:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:13.560 63488+0 records in 00:28:13.560 63488+0 records out 00:28:13.560 32505856 bytes (33 MB, 31 MiB) copied, 7.43234 s, 4.4 MB/s 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:13.560 07:25:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:13.560 [2024-11-20 07:25:37.377315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.560 [2024-11-20 07:25:37.409337] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.560 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.561 "name": "raid_bdev1", 00:28:13.561 "uuid": 
"e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:13.561 "strip_size_kb": 0, 00:28:13.561 "state": "online", 00:28:13.561 "raid_level": "raid1", 00:28:13.561 "superblock": true, 00:28:13.561 "num_base_bdevs": 4, 00:28:13.561 "num_base_bdevs_discovered": 3, 00:28:13.561 "num_base_bdevs_operational": 3, 00:28:13.561 "base_bdevs_list": [ 00:28:13.561 { 00:28:13.561 "name": null, 00:28:13.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.561 "is_configured": false, 00:28:13.561 "data_offset": 0, 00:28:13.561 "data_size": 63488 00:28:13.561 }, 00:28:13.561 { 00:28:13.561 "name": "BaseBdev2", 00:28:13.561 "uuid": "9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:13.561 "is_configured": true, 00:28:13.561 "data_offset": 2048, 00:28:13.561 "data_size": 63488 00:28:13.561 }, 00:28:13.561 { 00:28:13.561 "name": "BaseBdev3", 00:28:13.561 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:13.561 "is_configured": true, 00:28:13.561 "data_offset": 2048, 00:28:13.561 "data_size": 63488 00:28:13.561 }, 00:28:13.561 { 00:28:13.561 "name": "BaseBdev4", 00:28:13.561 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:13.561 "is_configured": true, 00:28:13.561 "data_offset": 2048, 00:28:13.561 "data_size": 63488 00:28:13.561 } 00:28:13.561 ] 00:28:13.561 }' 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.561 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.819 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:13.819 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.819 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.819 [2024-11-20 07:25:37.905500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:13.819 [2024-11-20 07:25:37.919519] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:28:13.819 07:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.819 07:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:13.819 [2024-11-20 07:25:37.922216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:14.755 "name": "raid_bdev1", 00:28:14.755 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:14.755 "strip_size_kb": 0, 00:28:14.755 "state": "online", 00:28:14.755 "raid_level": "raid1", 00:28:14.755 "superblock": true, 00:28:14.755 "num_base_bdevs": 4, 00:28:14.755 "num_base_bdevs_discovered": 4, 00:28:14.755 "num_base_bdevs_operational": 4, 00:28:14.755 "process": { 00:28:14.755 "type": 
"rebuild", 00:28:14.755 "target": "spare", 00:28:14.755 "progress": { 00:28:14.755 "blocks": 20480, 00:28:14.755 "percent": 32 00:28:14.755 } 00:28:14.755 }, 00:28:14.755 "base_bdevs_list": [ 00:28:14.755 { 00:28:14.755 "name": "spare", 00:28:14.755 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:14.755 "is_configured": true, 00:28:14.755 "data_offset": 2048, 00:28:14.755 "data_size": 63488 00:28:14.755 }, 00:28:14.755 { 00:28:14.755 "name": "BaseBdev2", 00:28:14.755 "uuid": "9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:14.755 "is_configured": true, 00:28:14.755 "data_offset": 2048, 00:28:14.755 "data_size": 63488 00:28:14.755 }, 00:28:14.755 { 00:28:14.755 "name": "BaseBdev3", 00:28:14.755 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:14.755 "is_configured": true, 00:28:14.755 "data_offset": 2048, 00:28:14.755 "data_size": 63488 00:28:14.755 }, 00:28:14.755 { 00:28:14.755 "name": "BaseBdev4", 00:28:14.755 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:14.755 "is_configured": true, 00:28:14.755 "data_offset": 2048, 00:28:14.755 "data_size": 63488 00:28:14.755 } 00:28:14.755 ] 00:28:14.755 }' 00:28:14.755 07:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:14.755 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:14.755 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.014 [2024-11-20 07:25:39.095890] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.014 [2024-11-20 07:25:39.130989] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:15.014 [2024-11-20 07:25:39.131071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.014 [2024-11-20 07:25:39.131097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.014 [2024-11-20 07:25:39.131112] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.014 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.014 "name": "raid_bdev1", 00:28:15.014 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:15.014 "strip_size_kb": 0, 00:28:15.014 "state": "online", 00:28:15.014 "raid_level": "raid1", 00:28:15.014 "superblock": true, 00:28:15.014 "num_base_bdevs": 4, 00:28:15.014 "num_base_bdevs_discovered": 3, 00:28:15.014 "num_base_bdevs_operational": 3, 00:28:15.014 "base_bdevs_list": [ 00:28:15.014 { 00:28:15.014 "name": null, 00:28:15.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.015 "is_configured": false, 00:28:15.015 "data_offset": 0, 00:28:15.015 "data_size": 63488 00:28:15.015 }, 00:28:15.015 { 00:28:15.015 "name": "BaseBdev2", 00:28:15.015 "uuid": "9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:15.015 "is_configured": true, 00:28:15.015 "data_offset": 2048, 00:28:15.015 "data_size": 63488 00:28:15.015 }, 00:28:15.015 { 00:28:15.015 "name": "BaseBdev3", 00:28:15.015 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:15.015 "is_configured": true, 00:28:15.015 "data_offset": 2048, 00:28:15.015 "data_size": 63488 00:28:15.015 }, 00:28:15.015 { 00:28:15.015 "name": "BaseBdev4", 00:28:15.015 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:15.015 "is_configured": true, 00:28:15.015 "data_offset": 2048, 00:28:15.015 "data_size": 63488 00:28:15.015 } 00:28:15.015 ] 00:28:15.015 }' 00:28:15.015 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.015 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.580 07:25:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.580 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:15.580 "name": "raid_bdev1", 00:28:15.580 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:15.580 "strip_size_kb": 0, 00:28:15.580 "state": "online", 00:28:15.580 "raid_level": "raid1", 00:28:15.580 "superblock": true, 00:28:15.580 "num_base_bdevs": 4, 00:28:15.580 "num_base_bdevs_discovered": 3, 00:28:15.580 "num_base_bdevs_operational": 3, 00:28:15.580 "base_bdevs_list": [ 00:28:15.580 { 00:28:15.580 "name": null, 00:28:15.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.580 "is_configured": false, 00:28:15.580 "data_offset": 0, 00:28:15.580 "data_size": 63488 00:28:15.580 }, 00:28:15.580 { 00:28:15.580 "name": "BaseBdev2", 00:28:15.580 "uuid": "9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:15.580 "is_configured": true, 00:28:15.580 "data_offset": 2048, 00:28:15.580 "data_size": 
63488 00:28:15.580 }, 00:28:15.580 { 00:28:15.581 "name": "BaseBdev3", 00:28:15.581 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:15.581 "is_configured": true, 00:28:15.581 "data_offset": 2048, 00:28:15.581 "data_size": 63488 00:28:15.581 }, 00:28:15.581 { 00:28:15.581 "name": "BaseBdev4", 00:28:15.581 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:15.581 "is_configured": true, 00:28:15.581 "data_offset": 2048, 00:28:15.581 "data_size": 63488 00:28:15.581 } 00:28:15.581 ] 00:28:15.581 }' 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.581 [2024-11-20 07:25:39.841443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:15.581 [2024-11-20 07:25:39.855537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.581 07:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:15.581 [2024-11-20 07:25:39.858143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:16.955 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.956 "name": "raid_bdev1", 00:28:16.956 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:16.956 "strip_size_kb": 0, 00:28:16.956 "state": "online", 00:28:16.956 "raid_level": "raid1", 00:28:16.956 "superblock": true, 00:28:16.956 "num_base_bdevs": 4, 00:28:16.956 "num_base_bdevs_discovered": 4, 00:28:16.956 "num_base_bdevs_operational": 4, 00:28:16.956 "process": { 00:28:16.956 "type": "rebuild", 00:28:16.956 "target": "spare", 00:28:16.956 "progress": { 00:28:16.956 "blocks": 20480, 00:28:16.956 "percent": 32 00:28:16.956 } 00:28:16.956 }, 00:28:16.956 "base_bdevs_list": [ 00:28:16.956 { 00:28:16.956 "name": "spare", 00:28:16.956 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": "BaseBdev2", 00:28:16.956 "uuid": 
"9b8c0db5-b04e-52e7-ab79-370acb4a6a38", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": "BaseBdev3", 00:28:16.956 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": "BaseBdev4", 00:28:16.956 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 } 00:28:16.956 ] 00:28:16.956 }' 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:16.956 07:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:16.956 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.956 [2024-11-20 07:25:41.039441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:16.956 [2024-11-20 07:25:41.166893] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.956 "name": "raid_bdev1", 00:28:16.956 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:16.956 "strip_size_kb": 0, 00:28:16.956 
"state": "online", 00:28:16.956 "raid_level": "raid1", 00:28:16.956 "superblock": true, 00:28:16.956 "num_base_bdevs": 4, 00:28:16.956 "num_base_bdevs_discovered": 3, 00:28:16.956 "num_base_bdevs_operational": 3, 00:28:16.956 "process": { 00:28:16.956 "type": "rebuild", 00:28:16.956 "target": "spare", 00:28:16.956 "progress": { 00:28:16.956 "blocks": 24576, 00:28:16.956 "percent": 38 00:28:16.956 } 00:28:16.956 }, 00:28:16.956 "base_bdevs_list": [ 00:28:16.956 { 00:28:16.956 "name": "spare", 00:28:16.956 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": null, 00:28:16.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.956 "is_configured": false, 00:28:16.956 "data_offset": 0, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": "BaseBdev3", 00:28:16.956 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 }, 00:28:16.956 { 00:28:16.956 "name": "BaseBdev4", 00:28:16.956 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:16.956 "is_configured": true, 00:28:16.956 "data_offset": 2048, 00:28:16.956 "data_size": 63488 00:28:16.956 } 00:28:16.956 ] 00:28:16.956 }' 00:28:16.956 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:17.214 "name": "raid_bdev1", 00:28:17.214 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:17.214 "strip_size_kb": 0, 00:28:17.214 "state": "online", 00:28:17.214 "raid_level": "raid1", 00:28:17.214 "superblock": true, 00:28:17.214 "num_base_bdevs": 4, 00:28:17.214 "num_base_bdevs_discovered": 3, 00:28:17.214 "num_base_bdevs_operational": 3, 00:28:17.214 "process": { 00:28:17.214 "type": "rebuild", 00:28:17.214 "target": "spare", 00:28:17.214 "progress": { 00:28:17.214 "blocks": 26624, 00:28:17.214 "percent": 41 00:28:17.214 } 00:28:17.214 }, 00:28:17.214 "base_bdevs_list": [ 00:28:17.214 { 00:28:17.214 "name": "spare", 00:28:17.214 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:17.214 "is_configured": 
true, 00:28:17.214 "data_offset": 2048, 00:28:17.214 "data_size": 63488 00:28:17.214 }, 00:28:17.214 { 00:28:17.214 "name": null, 00:28:17.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.214 "is_configured": false, 00:28:17.214 "data_offset": 0, 00:28:17.214 "data_size": 63488 00:28:17.214 }, 00:28:17.214 { 00:28:17.214 "name": "BaseBdev3", 00:28:17.214 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:17.214 "is_configured": true, 00:28:17.214 "data_offset": 2048, 00:28:17.214 "data_size": 63488 00:28:17.214 }, 00:28:17.214 { 00:28:17.214 "name": "BaseBdev4", 00:28:17.214 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:17.214 "is_configured": true, 00:28:17.214 "data_offset": 2048, 00:28:17.214 "data_size": 63488 00:28:17.214 } 00:28:17.214 ] 00:28:17.214 }' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.214 07:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:18.592 "name": "raid_bdev1", 00:28:18.592 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:18.592 "strip_size_kb": 0, 00:28:18.592 "state": "online", 00:28:18.592 "raid_level": "raid1", 00:28:18.592 "superblock": true, 00:28:18.592 "num_base_bdevs": 4, 00:28:18.592 "num_base_bdevs_discovered": 3, 00:28:18.592 "num_base_bdevs_operational": 3, 00:28:18.592 "process": { 00:28:18.592 "type": "rebuild", 00:28:18.592 "target": "spare", 00:28:18.592 "progress": { 00:28:18.592 "blocks": 51200, 00:28:18.592 "percent": 80 00:28:18.592 } 00:28:18.592 }, 00:28:18.592 "base_bdevs_list": [ 00:28:18.592 { 00:28:18.592 "name": "spare", 00:28:18.592 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:18.592 "is_configured": true, 00:28:18.592 "data_offset": 2048, 00:28:18.592 "data_size": 63488 00:28:18.592 }, 00:28:18.592 { 00:28:18.592 "name": null, 00:28:18.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.592 "is_configured": false, 00:28:18.592 "data_offset": 0, 00:28:18.592 "data_size": 63488 00:28:18.592 }, 00:28:18.592 { 00:28:18.592 "name": "BaseBdev3", 00:28:18.592 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:18.592 "is_configured": true, 00:28:18.592 "data_offset": 2048, 00:28:18.592 "data_size": 63488 00:28:18.592 }, 00:28:18.592 { 00:28:18.592 "name": "BaseBdev4", 00:28:18.592 "uuid": 
"81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:18.592 "is_configured": true, 00:28:18.592 "data_offset": 2048, 00:28:18.592 "data_size": 63488 00:28:18.592 } 00:28:18.592 ] 00:28:18.592 }' 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:18.592 07:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:18.851 [2024-11-20 07:25:43.080236] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:18.851 [2024-11-20 07:25:43.080316] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:18.851 [2024-11-20 07:25:43.080475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.419 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.677 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:19.677 "name": "raid_bdev1", 00:28:19.677 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:19.677 "strip_size_kb": 0, 00:28:19.677 "state": "online", 00:28:19.677 "raid_level": "raid1", 00:28:19.677 "superblock": true, 00:28:19.677 "num_base_bdevs": 4, 00:28:19.677 "num_base_bdevs_discovered": 3, 00:28:19.677 "num_base_bdevs_operational": 3, 00:28:19.677 "base_bdevs_list": [ 00:28:19.677 { 00:28:19.677 "name": "spare", 00:28:19.677 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:19.677 "is_configured": true, 00:28:19.677 "data_offset": 2048, 00:28:19.677 "data_size": 63488 00:28:19.677 }, 00:28:19.677 { 00:28:19.677 "name": null, 00:28:19.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.677 "is_configured": false, 00:28:19.677 "data_offset": 0, 00:28:19.677 "data_size": 63488 00:28:19.677 }, 00:28:19.677 { 00:28:19.677 "name": "BaseBdev3", 00:28:19.677 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:19.677 "is_configured": true, 00:28:19.677 "data_offset": 2048, 00:28:19.677 "data_size": 63488 00:28:19.677 }, 00:28:19.678 { 00:28:19.678 "name": "BaseBdev4", 00:28:19.678 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:19.678 "is_configured": true, 00:28:19.678 "data_offset": 2048, 00:28:19.678 "data_size": 63488 00:28:19.678 } 00:28:19.678 ] 00:28:19.678 }' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:19.678 
07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:19.678 "name": "raid_bdev1", 00:28:19.678 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:19.678 "strip_size_kb": 0, 00:28:19.678 "state": "online", 00:28:19.678 "raid_level": "raid1", 00:28:19.678 "superblock": true, 00:28:19.678 "num_base_bdevs": 4, 00:28:19.678 "num_base_bdevs_discovered": 3, 00:28:19.678 "num_base_bdevs_operational": 3, 00:28:19.678 "base_bdevs_list": [ 00:28:19.678 { 00:28:19.678 "name": "spare", 00:28:19.678 "uuid": 
"b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:19.678 "is_configured": true, 00:28:19.678 "data_offset": 2048, 00:28:19.678 "data_size": 63488 00:28:19.678 }, 00:28:19.678 { 00:28:19.678 "name": null, 00:28:19.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.678 "is_configured": false, 00:28:19.678 "data_offset": 0, 00:28:19.678 "data_size": 63488 00:28:19.678 }, 00:28:19.678 { 00:28:19.678 "name": "BaseBdev3", 00:28:19.678 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:19.678 "is_configured": true, 00:28:19.678 "data_offset": 2048, 00:28:19.678 "data_size": 63488 00:28:19.678 }, 00:28:19.678 { 00:28:19.678 "name": "BaseBdev4", 00:28:19.678 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:19.678 "is_configured": true, 00:28:19.678 "data_offset": 2048, 00:28:19.678 "data_size": 63488 00:28:19.678 } 00:28:19.678 ] 00:28:19.678 }' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:19.678 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.940 07:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.940 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.940 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.940 "name": "raid_bdev1", 00:28:19.940 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:19.940 "strip_size_kb": 0, 00:28:19.940 "state": "online", 00:28:19.940 "raid_level": "raid1", 00:28:19.940 "superblock": true, 00:28:19.940 "num_base_bdevs": 4, 00:28:19.940 "num_base_bdevs_discovered": 3, 00:28:19.940 "num_base_bdevs_operational": 3, 00:28:19.940 "base_bdevs_list": [ 00:28:19.940 { 00:28:19.940 "name": "spare", 00:28:19.940 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:19.940 "is_configured": true, 00:28:19.940 "data_offset": 2048, 00:28:19.940 "data_size": 63488 00:28:19.940 }, 00:28:19.940 { 00:28:19.940 "name": null, 00:28:19.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.940 "is_configured": false, 00:28:19.940 "data_offset": 0, 00:28:19.940 "data_size": 63488 00:28:19.940 }, 00:28:19.940 { 00:28:19.940 "name": "BaseBdev3", 00:28:19.940 "uuid": 
"721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:19.940 "is_configured": true, 00:28:19.940 "data_offset": 2048, 00:28:19.940 "data_size": 63488 00:28:19.940 }, 00:28:19.940 { 00:28:19.940 "name": "BaseBdev4", 00:28:19.940 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:19.940 "is_configured": true, 00:28:19.940 "data_offset": 2048, 00:28:19.940 "data_size": 63488 00:28:19.940 } 00:28:19.940 ] 00:28:19.940 }' 00:28:19.940 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.940 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 [2024-11-20 07:25:44.519713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:20.522 [2024-11-20 07:25:44.519900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:20.522 [2024-11-20 07:25:44.520141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:20.522 [2024-11-20 07:25:44.520354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:20.522 [2024-11-20 07:25:44.520381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:20.522 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:20.779 /dev/nbd0 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:20.779 07:25:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:20.779 1+0 records in 00:28:20.779 1+0 records out 00:28:20.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219938 s, 18.6 MB/s 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:20.779 07:25:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:21.037 /dev/nbd1 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.037 1+0 records in 00:28:21.037 1+0 records out 00:28:21.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404336 s, 10.1 MB/s 00:28:21.037 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:21.038 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:21.296 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:21.552 
07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:21.552 07:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.810 [2024-11-20 07:25:46.036512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:21.810 [2024-11-20 07:25:46.036568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.810 [2024-11-20 07:25:46.036641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:21.810 [2024-11-20 07:25:46.036658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.810 [2024-11-20 07:25:46.039787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.810 [2024-11-20 07:25:46.039828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:21.810 [2024-11-20 07:25:46.039947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:21.810 [2024-11-20 07:25:46.040019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:21.810 [2024-11-20 07:25:46.040169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:21.810 [2024-11-20 07:25:46.040335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:21.810 spare 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.810 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 [2024-11-20 07:25:46.140469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:22.069 [2024-11-20 07:25:46.140510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:22.069 [2024-11-20 
07:25:46.140951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:28:22.069 [2024-11-20 07:25:46.141200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:22.069 [2024-11-20 07:25:46.141219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:22.069 [2024-11-20 07:25:46.141511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.069 07:25:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:22.069 "name": "raid_bdev1", 00:28:22.069 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:22.069 "strip_size_kb": 0, 00:28:22.069 "state": "online", 00:28:22.069 "raid_level": "raid1", 00:28:22.069 "superblock": true, 00:28:22.069 "num_base_bdevs": 4, 00:28:22.069 "num_base_bdevs_discovered": 3, 00:28:22.069 "num_base_bdevs_operational": 3, 00:28:22.069 "base_bdevs_list": [ 00:28:22.069 { 00:28:22.069 "name": "spare", 00:28:22.069 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:22.069 "is_configured": true, 00:28:22.069 "data_offset": 2048, 00:28:22.069 "data_size": 63488 00:28:22.069 }, 00:28:22.069 { 00:28:22.069 "name": null, 00:28:22.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.069 "is_configured": false, 00:28:22.069 "data_offset": 2048, 00:28:22.069 "data_size": 63488 00:28:22.069 }, 00:28:22.069 { 00:28:22.069 "name": "BaseBdev3", 00:28:22.069 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:22.069 "is_configured": true, 00:28:22.069 "data_offset": 2048, 00:28:22.069 "data_size": 63488 00:28:22.069 }, 00:28:22.069 { 00:28:22.069 "name": "BaseBdev4", 00:28:22.069 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:22.069 "is_configured": true, 00:28:22.069 "data_offset": 2048, 00:28:22.069 "data_size": 63488 00:28:22.069 } 00:28:22.069 ] 00:28:22.069 }' 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:22.069 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:22.635 "name": "raid_bdev1", 00:28:22.635 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:22.635 "strip_size_kb": 0, 00:28:22.635 "state": "online", 00:28:22.635 "raid_level": "raid1", 00:28:22.635 "superblock": true, 00:28:22.635 "num_base_bdevs": 4, 00:28:22.635 "num_base_bdevs_discovered": 3, 00:28:22.635 "num_base_bdevs_operational": 3, 00:28:22.635 "base_bdevs_list": [ 00:28:22.635 { 00:28:22.635 "name": "spare", 00:28:22.635 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:22.635 "is_configured": true, 00:28:22.635 "data_offset": 2048, 00:28:22.635 "data_size": 63488 00:28:22.635 }, 00:28:22.635 { 00:28:22.635 "name": null, 00:28:22.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.635 "is_configured": false, 00:28:22.635 "data_offset": 2048, 00:28:22.635 "data_size": 63488 00:28:22.635 }, 00:28:22.635 { 00:28:22.635 "name": 
"BaseBdev3", 00:28:22.635 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:22.635 "is_configured": true, 00:28:22.635 "data_offset": 2048, 00:28:22.635 "data_size": 63488 00:28:22.635 }, 00:28:22.635 { 00:28:22.635 "name": "BaseBdev4", 00:28:22.635 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:22.635 "is_configured": true, 00:28:22.635 "data_offset": 2048, 00:28:22.635 "data_size": 63488 00:28:22.635 } 00:28:22.635 ] 00:28:22.635 }' 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 [2024-11-20 07:25:46.873588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:22.635 07:25:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.635 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.636 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.894 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:22.894 "name": "raid_bdev1", 00:28:22.894 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:22.894 "strip_size_kb": 0, 00:28:22.894 "state": "online", 
00:28:22.894 "raid_level": "raid1", 00:28:22.894 "superblock": true, 00:28:22.894 "num_base_bdevs": 4, 00:28:22.894 "num_base_bdevs_discovered": 2, 00:28:22.894 "num_base_bdevs_operational": 2, 00:28:22.894 "base_bdevs_list": [ 00:28:22.894 { 00:28:22.894 "name": null, 00:28:22.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.894 "is_configured": false, 00:28:22.894 "data_offset": 0, 00:28:22.894 "data_size": 63488 00:28:22.894 }, 00:28:22.894 { 00:28:22.894 "name": null, 00:28:22.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.894 "is_configured": false, 00:28:22.894 "data_offset": 2048, 00:28:22.894 "data_size": 63488 00:28:22.894 }, 00:28:22.894 { 00:28:22.894 "name": "BaseBdev3", 00:28:22.894 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:22.894 "is_configured": true, 00:28:22.894 "data_offset": 2048, 00:28:22.894 "data_size": 63488 00:28:22.894 }, 00:28:22.894 { 00:28:22.894 "name": "BaseBdev4", 00:28:22.894 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:22.894 "is_configured": true, 00:28:22.894 "data_offset": 2048, 00:28:22.894 "data_size": 63488 00:28:22.894 } 00:28:22.894 ] 00:28:22.894 }' 00:28:22.894 07:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:22.894 07:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.152 07:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:23.152 07:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.152 07:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.152 [2024-11-20 07:25:47.397836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:23.152 [2024-11-20 07:25:47.398115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:28:23.152 [2024-11-20 07:25:47.398136] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:23.152 [2024-11-20 07:25:47.398197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:23.152 [2024-11-20 07:25:47.410515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:28:23.152 07:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.152 07:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:23.152 [2024-11-20 07:25:47.413178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:24.528 "name": "raid_bdev1", 00:28:24.528 "uuid": 
"e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:24.528 "strip_size_kb": 0, 00:28:24.528 "state": "online", 00:28:24.528 "raid_level": "raid1", 00:28:24.528 "superblock": true, 00:28:24.528 "num_base_bdevs": 4, 00:28:24.528 "num_base_bdevs_discovered": 3, 00:28:24.528 "num_base_bdevs_operational": 3, 00:28:24.528 "process": { 00:28:24.528 "type": "rebuild", 00:28:24.528 "target": "spare", 00:28:24.528 "progress": { 00:28:24.528 "blocks": 20480, 00:28:24.528 "percent": 32 00:28:24.528 } 00:28:24.528 }, 00:28:24.528 "base_bdevs_list": [ 00:28:24.528 { 00:28:24.528 "name": "spare", 00:28:24.528 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:24.528 "is_configured": true, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": null, 00:28:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.528 "is_configured": false, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": "BaseBdev3", 00:28:24.528 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:24.528 "is_configured": true, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": "BaseBdev4", 00:28:24.528 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:24.528 "is_configured": true, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 } 00:28:24.528 ] 00:28:24.528 }' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:24.528 [2024-11-20 07:25:48.586581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:24.528 [2024-11-20 07:25:48.621615] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:24.528 [2024-11-20 07:25:48.621723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.528 [2024-11-20 07:25:48.621763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:24.528 [2024-11-20 07:25:48.621790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:24.528 "name": "raid_bdev1", 00:28:24.528 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:24.528 "strip_size_kb": 0, 00:28:24.528 "state": "online", 00:28:24.528 "raid_level": "raid1", 00:28:24.528 "superblock": true, 00:28:24.528 "num_base_bdevs": 4, 00:28:24.528 "num_base_bdevs_discovered": 2, 00:28:24.528 "num_base_bdevs_operational": 2, 00:28:24.528 "base_bdevs_list": [ 00:28:24.528 { 00:28:24.528 "name": null, 00:28:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.528 "is_configured": false, 00:28:24.528 "data_offset": 0, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": null, 00:28:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.528 "is_configured": false, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": "BaseBdev3", 00:28:24.528 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:24.528 "is_configured": true, 00:28:24.528 "data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 }, 00:28:24.528 { 00:28:24.528 "name": "BaseBdev4", 00:28:24.528 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:24.528 "is_configured": true, 00:28:24.528 
"data_offset": 2048, 00:28:24.528 "data_size": 63488 00:28:24.528 } 00:28:24.528 ] 00:28:24.528 }' 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:24.528 07:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.105 07:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:25.105 07:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.105 07:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.105 [2024-11-20 07:25:49.161124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:25.105 [2024-11-20 07:25:49.161209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.105 [2024-11-20 07:25:49.161242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:25.105 [2024-11-20 07:25:49.161256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.105 [2024-11-20 07:25:49.162025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.105 [2024-11-20 07:25:49.162062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:25.105 [2024-11-20 07:25:49.162202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:25.105 [2024-11-20 07:25:49.162220] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:28:25.105 [2024-11-20 07:25:49.162238] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:25.105 [2024-11-20 07:25:49.162275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:25.105 [2024-11-20 07:25:49.175866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:28:25.105 spare 00:28:25.105 07:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.105 07:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:25.105 [2024-11-20 07:25:49.178311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:26.042 "name": "raid_bdev1", 00:28:26.042 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:26.042 "strip_size_kb": 0, 00:28:26.042 "state": "online", 00:28:26.042 
"raid_level": "raid1", 00:28:26.042 "superblock": true, 00:28:26.042 "num_base_bdevs": 4, 00:28:26.042 "num_base_bdevs_discovered": 3, 00:28:26.042 "num_base_bdevs_operational": 3, 00:28:26.042 "process": { 00:28:26.042 "type": "rebuild", 00:28:26.042 "target": "spare", 00:28:26.042 "progress": { 00:28:26.042 "blocks": 20480, 00:28:26.042 "percent": 32 00:28:26.042 } 00:28:26.042 }, 00:28:26.042 "base_bdevs_list": [ 00:28:26.042 { 00:28:26.042 "name": "spare", 00:28:26.042 "uuid": "b82ad7b7-4e9c-589c-9dcd-f94d7e9bb37b", 00:28:26.042 "is_configured": true, 00:28:26.042 "data_offset": 2048, 00:28:26.042 "data_size": 63488 00:28:26.042 }, 00:28:26.042 { 00:28:26.042 "name": null, 00:28:26.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.042 "is_configured": false, 00:28:26.042 "data_offset": 2048, 00:28:26.042 "data_size": 63488 00:28:26.042 }, 00:28:26.042 { 00:28:26.042 "name": "BaseBdev3", 00:28:26.042 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:26.042 "is_configured": true, 00:28:26.042 "data_offset": 2048, 00:28:26.042 "data_size": 63488 00:28:26.042 }, 00:28:26.042 { 00:28:26.042 "name": "BaseBdev4", 00:28:26.042 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:26.042 "is_configured": true, 00:28:26.042 "data_offset": 2048, 00:28:26.042 "data_size": 63488 00:28:26.042 } 00:28:26.042 ] 00:28:26.042 }' 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:26.042 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.302 [2024-11-20 07:25:50.347882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:26.302 [2024-11-20 07:25:50.387061] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:26.302 [2024-11-20 07:25:50.387328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.302 [2024-11-20 07:25:50.387487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:26.302 [2024-11-20 07:25:50.387556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.302 
07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.302 "name": "raid_bdev1", 00:28:26.302 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:26.302 "strip_size_kb": 0, 00:28:26.302 "state": "online", 00:28:26.302 "raid_level": "raid1", 00:28:26.302 "superblock": true, 00:28:26.302 "num_base_bdevs": 4, 00:28:26.302 "num_base_bdevs_discovered": 2, 00:28:26.302 "num_base_bdevs_operational": 2, 00:28:26.302 "base_bdevs_list": [ 00:28:26.302 { 00:28:26.302 "name": null, 00:28:26.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.302 "is_configured": false, 00:28:26.302 "data_offset": 0, 00:28:26.302 "data_size": 63488 00:28:26.302 }, 00:28:26.302 { 00:28:26.302 "name": null, 00:28:26.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.302 "is_configured": false, 00:28:26.302 "data_offset": 2048, 00:28:26.302 "data_size": 63488 00:28:26.302 }, 00:28:26.302 { 00:28:26.302 "name": "BaseBdev3", 00:28:26.302 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:26.302 "is_configured": true, 00:28:26.302 "data_offset": 2048, 00:28:26.302 "data_size": 63488 00:28:26.302 }, 00:28:26.302 { 00:28:26.302 "name": "BaseBdev4", 00:28:26.302 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:26.302 "is_configured": true, 00:28:26.302 "data_offset": 2048, 00:28:26.302 "data_size": 63488 00:28:26.302 } 00:28:26.302 ] 00:28:26.302 }' 00:28:26.302 07:25:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.302 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:26.895 "name": "raid_bdev1", 00:28:26.895 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:26.895 "strip_size_kb": 0, 00:28:26.895 "state": "online", 00:28:26.895 "raid_level": "raid1", 00:28:26.895 "superblock": true, 00:28:26.895 "num_base_bdevs": 4, 00:28:26.895 "num_base_bdevs_discovered": 2, 00:28:26.895 "num_base_bdevs_operational": 2, 00:28:26.895 "base_bdevs_list": [ 00:28:26.895 { 00:28:26.895 "name": null, 00:28:26.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.895 "is_configured": false, 00:28:26.895 "data_offset": 0, 00:28:26.895 "data_size": 63488 00:28:26.895 }, 00:28:26.895 
{ 00:28:26.895 "name": null, 00:28:26.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.895 "is_configured": false, 00:28:26.895 "data_offset": 2048, 00:28:26.895 "data_size": 63488 00:28:26.895 }, 00:28:26.895 { 00:28:26.895 "name": "BaseBdev3", 00:28:26.895 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:26.895 "is_configured": true, 00:28:26.895 "data_offset": 2048, 00:28:26.895 "data_size": 63488 00:28:26.895 }, 00:28:26.895 { 00:28:26.895 "name": "BaseBdev4", 00:28:26.895 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:26.895 "is_configured": true, 00:28:26.895 "data_offset": 2048, 00:28:26.895 "data_size": 63488 00:28:26.895 } 00:28:26.895 ] 00:28:26.895 }' 00:28:26.895 07:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.895 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.895 [2024-11-20 07:25:51.114527] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:26.895 [2024-11-20 07:25:51.114632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.895 [2024-11-20 07:25:51.114662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:28:26.895 [2024-11-20 07:25:51.114679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.895 [2024-11-20 07:25:51.115342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.896 [2024-11-20 07:25:51.115408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:26.896 [2024-11-20 07:25:51.115519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:26.896 [2024-11-20 07:25:51.115543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:28:26.896 [2024-11-20 07:25:51.115555] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:26.896 [2024-11-20 07:25:51.115583] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:26.896 BaseBdev1 00:28:26.896 07:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.896 07:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:28.271 07:25:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:28.271 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:28.272 "name": "raid_bdev1", 00:28:28.272 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:28.272 "strip_size_kb": 0, 00:28:28.272 "state": "online", 00:28:28.272 "raid_level": "raid1", 00:28:28.272 "superblock": true, 00:28:28.272 "num_base_bdevs": 4, 00:28:28.272 "num_base_bdevs_discovered": 2, 00:28:28.272 "num_base_bdevs_operational": 2, 00:28:28.272 "base_bdevs_list": [ 00:28:28.272 { 00:28:28.272 "name": null, 00:28:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.272 "is_configured": false, 00:28:28.272 "data_offset": 0, 00:28:28.272 "data_size": 63488 00:28:28.272 }, 00:28:28.272 { 00:28:28.272 "name": null, 00:28:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.272 
"is_configured": false, 00:28:28.272 "data_offset": 2048, 00:28:28.272 "data_size": 63488 00:28:28.272 }, 00:28:28.272 { 00:28:28.272 "name": "BaseBdev3", 00:28:28.272 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:28.272 "is_configured": true, 00:28:28.272 "data_offset": 2048, 00:28:28.272 "data_size": 63488 00:28:28.272 }, 00:28:28.272 { 00:28:28.272 "name": "BaseBdev4", 00:28:28.272 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:28.272 "is_configured": true, 00:28:28.272 "data_offset": 2048, 00:28:28.272 "data_size": 63488 00:28:28.272 } 00:28:28.272 ] 00:28:28.272 }' 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:28.272 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.530 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:28:28.530 "name": "raid_bdev1", 00:28:28.530 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:28.530 "strip_size_kb": 0, 00:28:28.530 "state": "online", 00:28:28.530 "raid_level": "raid1", 00:28:28.530 "superblock": true, 00:28:28.530 "num_base_bdevs": 4, 00:28:28.530 "num_base_bdevs_discovered": 2, 00:28:28.530 "num_base_bdevs_operational": 2, 00:28:28.530 "base_bdevs_list": [ 00:28:28.530 { 00:28:28.530 "name": null, 00:28:28.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.530 "is_configured": false, 00:28:28.530 "data_offset": 0, 00:28:28.530 "data_size": 63488 00:28:28.530 }, 00:28:28.530 { 00:28:28.530 "name": null, 00:28:28.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.530 "is_configured": false, 00:28:28.530 "data_offset": 2048, 00:28:28.530 "data_size": 63488 00:28:28.530 }, 00:28:28.530 { 00:28:28.530 "name": "BaseBdev3", 00:28:28.530 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:28.530 "is_configured": true, 00:28:28.530 "data_offset": 2048, 00:28:28.530 "data_size": 63488 00:28:28.530 }, 00:28:28.530 { 00:28:28.530 "name": "BaseBdev4", 00:28:28.530 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:28.531 "is_configured": true, 00:28:28.531 "data_offset": 2048, 00:28:28.531 "data_size": 63488 00:28:28.531 } 00:28:28.531 ] 00:28:28.531 }' 00:28:28.531 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:28.531 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:28.531 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.789 [2024-11-20 07:25:52.827132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:28.789 [2024-11-20 07:25:52.827409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:28:28.789 [2024-11-20 07:25:52.827428] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:28.789 request: 00:28:28.789 { 00:28:28.789 "base_bdev": "BaseBdev1", 00:28:28.789 "raid_bdev": "raid_bdev1", 00:28:28.789 "method": "bdev_raid_add_base_bdev", 00:28:28.789 "req_id": 1 00:28:28.789 } 00:28:28.789 Got JSON-RPC error response 00:28:28.789 response: 00:28:28.789 { 00:28:28.789 "code": -22, 00:28:28.789 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:28.789 } 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.789 07:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:29.724 "name": "raid_bdev1", 00:28:29.724 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:29.724 "strip_size_kb": 0, 00:28:29.724 "state": "online", 00:28:29.724 "raid_level": "raid1", 00:28:29.724 "superblock": true, 00:28:29.724 "num_base_bdevs": 4, 00:28:29.724 "num_base_bdevs_discovered": 2, 00:28:29.724 "num_base_bdevs_operational": 2, 00:28:29.724 "base_bdevs_list": [ 00:28:29.724 { 00:28:29.724 "name": null, 00:28:29.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.724 "is_configured": false, 00:28:29.724 "data_offset": 0, 00:28:29.724 "data_size": 63488 00:28:29.724 }, 00:28:29.724 { 00:28:29.724 "name": null, 00:28:29.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.724 "is_configured": false, 00:28:29.724 "data_offset": 2048, 00:28:29.724 "data_size": 63488 00:28:29.724 }, 00:28:29.724 { 00:28:29.724 "name": "BaseBdev3", 00:28:29.724 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:29.724 "is_configured": true, 00:28:29.724 "data_offset": 2048, 00:28:29.724 "data_size": 63488 00:28:29.724 }, 00:28:29.724 { 00:28:29.724 "name": "BaseBdev4", 00:28:29.724 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:29.724 "is_configured": true, 00:28:29.724 "data_offset": 2048, 00:28:29.724 "data_size": 63488 00:28:29.724 } 00:28:29.724 ] 00:28:29.724 }' 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:29.724 07:25:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:30.291 07:25:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.291 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:30.292 "name": "raid_bdev1", 00:28:30.292 "uuid": "e61a88f8-0017-4ae5-87fe-f71913ee1a95", 00:28:30.292 "strip_size_kb": 0, 00:28:30.292 "state": "online", 00:28:30.292 "raid_level": "raid1", 00:28:30.292 "superblock": true, 00:28:30.292 "num_base_bdevs": 4, 00:28:30.292 "num_base_bdevs_discovered": 2, 00:28:30.292 "num_base_bdevs_operational": 2, 00:28:30.292 "base_bdevs_list": [ 00:28:30.292 { 00:28:30.292 "name": null, 00:28:30.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.292 "is_configured": false, 00:28:30.292 "data_offset": 0, 00:28:30.292 "data_size": 63488 00:28:30.292 }, 00:28:30.292 { 00:28:30.292 "name": null, 00:28:30.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.292 "is_configured": false, 00:28:30.292 "data_offset": 2048, 00:28:30.292 "data_size": 63488 00:28:30.292 }, 00:28:30.292 { 00:28:30.292 "name": "BaseBdev3", 00:28:30.292 "uuid": "721e80fc-8a9c-5c93-ba36-bbaf3da41e0d", 00:28:30.292 "is_configured": true, 00:28:30.292 "data_offset": 2048, 00:28:30.292 "data_size": 63488 00:28:30.292 }, 
00:28:30.292 { 00:28:30.292 "name": "BaseBdev4", 00:28:30.292 "uuid": "81c8c919-69c5-5cde-8c3d-b4cd3aff22de", 00:28:30.292 "is_configured": true, 00:28:30.292 "data_offset": 2048, 00:28:30.292 "data_size": 63488 00:28:30.292 } 00:28:30.292 ] 00:28:30.292 }' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78488 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78488 ']' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78488 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78488 00:28:30.292 killing process with pid 78488 00:28:30.292 Received shutdown signal, test time was about 60.000000 seconds 00:28:30.292 00:28:30.292 Latency(us) 00:28:30.292 [2024-11-20T07:25:54.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.292 [2024-11-20T07:25:54.581Z] =================================================================================================================== 00:28:30.292 [2024-11-20T07:25:54.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78488' 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78488 00:28:30.292 [2024-11-20 07:25:54.551756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:30.292 07:25:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78488 00:28:30.292 [2024-11-20 07:25:54.551939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.292 [2024-11-20 07:25:54.552060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.292 [2024-11-20 07:25:54.552077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:30.859 [2024-11-20 07:25:54.964785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:31.797 ************************************ 00:28:31.797 END TEST raid_rebuild_test_sb 00:28:31.797 ************************************ 00:28:31.797 07:25:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:28:31.797 00:28:31.797 real 0m28.586s 00:28:31.797 user 0m35.148s 00:28:31.797 sys 0m3.864s 00:28:31.797 07:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.797 07:25:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.797 07:25:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:28:31.797 07:25:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:31.797 07:25:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.797 07:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:28:31.797 ************************************ 00:28:31.797 START TEST raid_rebuild_test_io 00:28:31.797 ************************************ 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79274 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79274 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79274 ']' 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.797 07:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:32.056 [2024-11-20 07:25:56.148363] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:28:32.056 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:32.056 Zero copy mechanism will not be used. 00:28:32.056 [2024-11-20 07:25:56.148563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79274 ] 00:28:32.056 [2024-11-20 07:25:56.336441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.315 [2024-11-20 07:25:56.462993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.574 [2024-11-20 07:25:56.652524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:32.574 [2024-11-20 07:25:56.652645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.833 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 BaseBdev1_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 [2024-11-20 07:25:57.134407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:33.093 [2024-11-20 07:25:57.134497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.093 [2024-11-20 07:25:57.134533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:33.093 [2024-11-20 07:25:57.134553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.093 [2024-11-20 07:25:57.137417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.093 [2024-11-20 07:25:57.137481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:33.093 BaseBdev1 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:28:33.093 BaseBdev2_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 [2024-11-20 07:25:57.193364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:33.093 [2024-11-20 07:25:57.193447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.093 [2024-11-20 07:25:57.193477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:33.093 [2024-11-20 07:25:57.193498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.093 [2024-11-20 07:25:57.196261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.093 [2024-11-20 07:25:57.196337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:33.093 BaseBdev2 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 BaseBdev3_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 [2024-11-20 07:25:57.266506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:33.093 [2024-11-20 07:25:57.266615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.093 [2024-11-20 07:25:57.266651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:33.093 [2024-11-20 07:25:57.266671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.093 [2024-11-20 07:25:57.269346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.093 [2024-11-20 07:25:57.269417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:33.093 BaseBdev3 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 BaseBdev4_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 [2024-11-20 07:25:57.320254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:33.093 [2024-11-20 07:25:57.320337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.093 [2024-11-20 07:25:57.320372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:33.093 [2024-11-20 07:25:57.320390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.093 [2024-11-20 07:25:57.323260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.093 [2024-11-20 07:25:57.323312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:33.093 BaseBdev4 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 spare_malloc 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 spare_delay 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.093 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.353 [2024-11-20 07:25:57.386408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:33.353 [2024-11-20 07:25:57.386513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.353 [2024-11-20 07:25:57.386544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:33.353 [2024-11-20 07:25:57.386564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.353 [2024-11-20 07:25:57.389718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.353 [2024-11-20 07:25:57.389793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:33.353 spare 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.353 [2024-11-20 07:25:57.398486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:33.353 [2024-11-20 07:25:57.401088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:33.353 [2024-11-20 07:25:57.401189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:33.353 [2024-11-20 07:25:57.401283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:28:33.353 [2024-11-20 07:25:57.401394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:33.353 [2024-11-20 07:25:57.401417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:33.353 [2024-11-20 07:25:57.401767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:33.353 [2024-11-20 07:25:57.402009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:33.353 [2024-11-20 07:25:57.402039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:33.353 [2024-11-20 07:25:57.402231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:33.353 "name": "raid_bdev1", 00:28:33.353 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:33.353 "strip_size_kb": 0, 00:28:33.353 "state": "online", 00:28:33.353 "raid_level": "raid1", 00:28:33.353 "superblock": false, 00:28:33.353 "num_base_bdevs": 4, 00:28:33.353 "num_base_bdevs_discovered": 4, 00:28:33.353 "num_base_bdevs_operational": 4, 00:28:33.353 "base_bdevs_list": [ 00:28:33.353 { 00:28:33.353 "name": "BaseBdev1", 00:28:33.353 "uuid": "00ac3c2f-8d9c-526f-8cf5-8e2a3a878681", 00:28:33.353 "is_configured": true, 00:28:33.353 "data_offset": 0, 00:28:33.353 "data_size": 65536 00:28:33.353 }, 00:28:33.353 { 00:28:33.353 "name": "BaseBdev2", 00:28:33.353 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:33.353 "is_configured": true, 00:28:33.353 "data_offset": 0, 00:28:33.353 "data_size": 65536 00:28:33.353 }, 00:28:33.353 { 00:28:33.353 "name": "BaseBdev3", 00:28:33.353 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:33.353 "is_configured": true, 00:28:33.353 "data_offset": 0, 00:28:33.353 "data_size": 65536 00:28:33.353 }, 00:28:33.353 { 00:28:33.353 "name": "BaseBdev4", 00:28:33.353 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:33.353 "is_configured": true, 00:28:33.353 "data_offset": 0, 00:28:33.353 "data_size": 65536 00:28:33.353 } 00:28:33.353 ] 00:28:33.353 }' 00:28:33.353 
07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:33.353 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 [2024-11-20 07:25:57.927136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.921 07:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:33.921 07:25:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 [2024-11-20 07:25:58.050679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:33.921 "name": "raid_bdev1", 00:28:33.921 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:33.921 "strip_size_kb": 0, 00:28:33.921 "state": "online", 00:28:33.921 "raid_level": "raid1", 00:28:33.921 "superblock": false, 00:28:33.921 "num_base_bdevs": 4, 00:28:33.921 "num_base_bdevs_discovered": 3, 00:28:33.921 "num_base_bdevs_operational": 3, 00:28:33.921 "base_bdevs_list": [ 00:28:33.921 { 00:28:33.921 "name": null, 00:28:33.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.921 "is_configured": false, 00:28:33.921 "data_offset": 0, 00:28:33.921 "data_size": 65536 00:28:33.921 }, 00:28:33.921 { 00:28:33.921 "name": "BaseBdev2", 00:28:33.921 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:33.921 "is_configured": true, 00:28:33.921 "data_offset": 0, 00:28:33.921 "data_size": 65536 00:28:33.921 }, 00:28:33.921 { 00:28:33.921 "name": "BaseBdev3", 00:28:33.921 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:33.921 "is_configured": true, 00:28:33.921 "data_offset": 0, 00:28:33.921 "data_size": 65536 00:28:33.921 }, 00:28:33.921 { 00:28:33.921 "name": "BaseBdev4", 00:28:33.921 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:33.921 "is_configured": true, 00:28:33.921 "data_offset": 0, 00:28:33.921 "data_size": 65536 00:28:33.921 } 00:28:33.921 ] 00:28:33.921 }' 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:33.921 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.921 [2024-11-20 07:25:58.179311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:33.921 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:33.921 Zero copy mechanism will not be used. 00:28:33.921 Running I/O for 60 seconds... 
00:28:34.489 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:34.489 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.489 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:34.489 [2024-11-20 07:25:58.594530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:34.489 07:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.489 07:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:34.489 [2024-11-20 07:25:58.652430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:34.489 [2024-11-20 07:25:58.655175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:34.489 [2024-11-20 07:25:58.756805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:34.489 [2024-11-20 07:25:58.757575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:34.748 [2024-11-20 07:25:58.982262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:34.748 [2024-11-20 07:25:58.982746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:35.268 162.00 IOPS, 486.00 MiB/s [2024-11-20T07:25:59.557Z] [2024-11-20 07:25:59.454172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:35.528 "name": "raid_bdev1", 00:28:35.528 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:35.528 "strip_size_kb": 0, 00:28:35.528 "state": "online", 00:28:35.528 "raid_level": "raid1", 00:28:35.528 "superblock": false, 00:28:35.528 "num_base_bdevs": 4, 00:28:35.528 "num_base_bdevs_discovered": 4, 00:28:35.528 "num_base_bdevs_operational": 4, 00:28:35.528 "process": { 00:28:35.528 "type": "rebuild", 00:28:35.528 "target": "spare", 00:28:35.528 "progress": { 00:28:35.528 "blocks": 12288, 00:28:35.528 "percent": 18 00:28:35.528 } 00:28:35.528 }, 00:28:35.528 "base_bdevs_list": [ 00:28:35.528 { 00:28:35.528 "name": "spare", 00:28:35.528 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:35.528 "is_configured": true, 00:28:35.528 "data_offset": 0, 00:28:35.528 "data_size": 65536 00:28:35.528 }, 00:28:35.528 { 00:28:35.528 "name": "BaseBdev2", 00:28:35.528 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:35.528 "is_configured": true, 00:28:35.528 "data_offset": 0, 00:28:35.528 
"data_size": 65536 00:28:35.528 }, 00:28:35.528 { 00:28:35.528 "name": "BaseBdev3", 00:28:35.528 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:35.528 "is_configured": true, 00:28:35.528 "data_offset": 0, 00:28:35.528 "data_size": 65536 00:28:35.528 }, 00:28:35.528 { 00:28:35.528 "name": "BaseBdev4", 00:28:35.528 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:35.528 "is_configured": true, 00:28:35.528 "data_offset": 0, 00:28:35.528 "data_size": 65536 00:28:35.528 } 00:28:35.528 ] 00:28:35.528 }' 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:35.528 [2024-11-20 07:25:59.718216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:35.528 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.529 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:35.529 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.529 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:35.529 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.529 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:35.529 [2024-11-20 07:25:59.810351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.789 [2024-11-20 07:25:59.841647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:35.789 [2024-11-20 07:25:59.941753] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:35.789 [2024-11-20 07:25:59.946442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:28:35.789 [2024-11-20 07:25:59.946514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.789 [2024-11-20 07:25:59.946529] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:35.789 [2024-11-20 07:25:59.975976] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:28:35.789 07:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.789 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:35.789 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:35.789 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:35.789 07:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:35.789 "name": "raid_bdev1", 00:28:35.789 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:35.789 "strip_size_kb": 0, 00:28:35.789 "state": "online", 00:28:35.789 "raid_level": "raid1", 00:28:35.789 "superblock": false, 00:28:35.789 "num_base_bdevs": 4, 00:28:35.789 "num_base_bdevs_discovered": 3, 00:28:35.789 "num_base_bdevs_operational": 3, 00:28:35.789 "base_bdevs_list": [ 00:28:35.789 { 00:28:35.789 "name": null, 00:28:35.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.789 "is_configured": false, 00:28:35.789 "data_offset": 0, 00:28:35.789 "data_size": 65536 00:28:35.789 }, 00:28:35.789 { 00:28:35.789 "name": "BaseBdev2", 00:28:35.789 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:35.789 "is_configured": true, 00:28:35.789 "data_offset": 0, 00:28:35.789 "data_size": 65536 00:28:35.789 }, 00:28:35.789 { 00:28:35.789 "name": "BaseBdev3", 00:28:35.789 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:35.789 "is_configured": true, 00:28:35.789 "data_offset": 0, 00:28:35.789 "data_size": 65536 00:28:35.789 }, 00:28:35.789 { 00:28:35.789 "name": "BaseBdev4", 00:28:35.789 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:35.789 "is_configured": true, 00:28:35.789 "data_offset": 0, 00:28:35.789 "data_size": 65536 00:28:35.789 } 00:28:35.789 ] 00:28:35.789 }' 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:35.789 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.306 131.00 IOPS, 393.00 MiB/s [2024-11-20T07:26:00.595Z] 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.307 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:36.566 "name": "raid_bdev1", 00:28:36.566 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:36.566 "strip_size_kb": 0, 00:28:36.566 "state": "online", 00:28:36.566 "raid_level": "raid1", 00:28:36.566 "superblock": false, 00:28:36.566 "num_base_bdevs": 4, 00:28:36.566 "num_base_bdevs_discovered": 3, 00:28:36.566 "num_base_bdevs_operational": 3, 00:28:36.566 "base_bdevs_list": [ 00:28:36.566 { 00:28:36.566 "name": null, 00:28:36.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.566 "is_configured": false, 00:28:36.566 "data_offset": 0, 00:28:36.566 "data_size": 65536 00:28:36.566 }, 00:28:36.566 { 00:28:36.566 "name": "BaseBdev2", 00:28:36.566 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:36.566 "is_configured": true, 00:28:36.566 "data_offset": 0, 00:28:36.566 "data_size": 65536 00:28:36.566 }, 00:28:36.566 { 00:28:36.566 "name": "BaseBdev3", 00:28:36.566 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:36.566 "is_configured": 
true, 00:28:36.566 "data_offset": 0, 00:28:36.566 "data_size": 65536 00:28:36.566 }, 00:28:36.566 { 00:28:36.566 "name": "BaseBdev4", 00:28:36.566 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:36.566 "is_configured": true, 00:28:36.566 "data_offset": 0, 00:28:36.566 "data_size": 65536 00:28:36.566 } 00:28:36.566 ] 00:28:36.566 }' 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.566 [2024-11-20 07:26:00.763141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.566 07:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:36.566 [2024-11-20 07:26:00.837717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:36.566 [2024-11-20 07:26:00.840327] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.825 [2024-11-20 07:26:00.974568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:37.084 137.67 IOPS, 413.00 MiB/s [2024-11-20T07:26:01.373Z] [2024-11-20 07:26:01.205131] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:37.084 [2024-11-20 07:26:01.206167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:37.652 "name": "raid_bdev1", 00:28:37.652 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:37.652 "strip_size_kb": 0, 00:28:37.652 "state": "online", 00:28:37.652 "raid_level": "raid1", 00:28:37.652 "superblock": false, 00:28:37.652 "num_base_bdevs": 4, 00:28:37.652 "num_base_bdevs_discovered": 4, 00:28:37.652 "num_base_bdevs_operational": 4, 00:28:37.652 "process": { 00:28:37.652 "type": "rebuild", 00:28:37.652 "target": "spare", 00:28:37.652 "progress": { 00:28:37.652 "blocks": 10240, 00:28:37.652 "percent": 15 00:28:37.652 } 00:28:37.652 
}, 00:28:37.652 "base_bdevs_list": [ 00:28:37.652 { 00:28:37.652 "name": "spare", 00:28:37.652 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:37.652 "is_configured": true, 00:28:37.652 "data_offset": 0, 00:28:37.652 "data_size": 65536 00:28:37.652 }, 00:28:37.652 { 00:28:37.652 "name": "BaseBdev2", 00:28:37.652 "uuid": "53f958d3-3d4f-5980-8782-4eecd6150923", 00:28:37.652 "is_configured": true, 00:28:37.652 "data_offset": 0, 00:28:37.652 "data_size": 65536 00:28:37.652 }, 00:28:37.652 { 00:28:37.652 "name": "BaseBdev3", 00:28:37.652 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:37.652 "is_configured": true, 00:28:37.652 "data_offset": 0, 00:28:37.652 "data_size": 65536 00:28:37.652 }, 00:28:37.652 { 00:28:37.652 "name": "BaseBdev4", 00:28:37.652 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:37.652 "is_configured": true, 00:28:37.652 "data_offset": 0, 00:28:37.652 "data_size": 65536 00:28:37.652 } 00:28:37.652 ] 00:28:37.652 }' 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.652 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:37.911 07:26:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.911 07:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:37.911 [2024-11-20 07:26:01.984424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:38.172 126.00 IOPS, 378.00 MiB/s [2024-11-20T07:26:02.461Z] [2024-11-20 07:26:02.220987] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:28:38.172 [2024-11-20 07:26:02.221057] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:28:38.172 [2024-11-20 07:26:02.222023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:38.172 [2024-11-20 07:26:02.232163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:38.172 "name": "raid_bdev1", 00:28:38.172 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:38.172 "strip_size_kb": 0, 00:28:38.172 "state": "online", 00:28:38.172 "raid_level": "raid1", 00:28:38.172 "superblock": false, 00:28:38.172 "num_base_bdevs": 4, 00:28:38.172 "num_base_bdevs_discovered": 3, 00:28:38.172 "num_base_bdevs_operational": 3, 00:28:38.172 "process": { 00:28:38.172 "type": "rebuild", 00:28:38.172 "target": "spare", 00:28:38.172 "progress": { 00:28:38.172 "blocks": 16384, 00:28:38.172 "percent": 25 00:28:38.172 } 00:28:38.172 }, 00:28:38.172 "base_bdevs_list": [ 00:28:38.172 { 00:28:38.172 "name": "spare", 00:28:38.172 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:38.172 "is_configured": true, 00:28:38.172 "data_offset": 0, 00:28:38.172 "data_size": 65536 00:28:38.172 }, 00:28:38.172 { 00:28:38.172 "name": null, 00:28:38.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.172 "is_configured": false, 00:28:38.172 "data_offset": 0, 00:28:38.172 "data_size": 65536 00:28:38.172 }, 00:28:38.172 { 00:28:38.172 "name": "BaseBdev3", 00:28:38.172 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:38.172 "is_configured": true, 00:28:38.172 "data_offset": 0, 00:28:38.172 "data_size": 65536 00:28:38.172 }, 00:28:38.172 { 00:28:38.172 "name": "BaseBdev4", 00:28:38.172 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:38.172 "is_configured": true, 00:28:38.172 "data_offset": 0, 00:28:38.172 "data_size": 65536 00:28:38.172 } 00:28:38.172 ] 00:28:38.172 }' 
00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:38.172 07:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.173 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:38.173 "name": "raid_bdev1", 00:28:38.173 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:38.173 
"strip_size_kb": 0, 00:28:38.173 "state": "online", 00:28:38.173 "raid_level": "raid1", 00:28:38.173 "superblock": false, 00:28:38.173 "num_base_bdevs": 4, 00:28:38.173 "num_base_bdevs_discovered": 3, 00:28:38.173 "num_base_bdevs_operational": 3, 00:28:38.173 "process": { 00:28:38.173 "type": "rebuild", 00:28:38.173 "target": "spare", 00:28:38.173 "progress": { 00:28:38.173 "blocks": 16384, 00:28:38.173 "percent": 25 00:28:38.173 } 00:28:38.173 }, 00:28:38.173 "base_bdevs_list": [ 00:28:38.173 { 00:28:38.173 "name": "spare", 00:28:38.173 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:38.173 "is_configured": true, 00:28:38.173 "data_offset": 0, 00:28:38.173 "data_size": 65536 00:28:38.173 }, 00:28:38.173 { 00:28:38.173 "name": null, 00:28:38.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.173 "is_configured": false, 00:28:38.173 "data_offset": 0, 00:28:38.173 "data_size": 65536 00:28:38.173 }, 00:28:38.173 { 00:28:38.173 "name": "BaseBdev3", 00:28:38.173 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:38.173 "is_configured": true, 00:28:38.173 "data_offset": 0, 00:28:38.173 "data_size": 65536 00:28:38.173 }, 00:28:38.173 { 00:28:38.173 "name": "BaseBdev4", 00:28:38.173 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:38.173 "is_configured": true, 00:28:38.173 "data_offset": 0, 00:28:38.173 "data_size": 65536 00:28:38.173 } 00:28:38.173 ] 00:28:38.173 }' 00:28:38.173 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:38.432 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.432 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:38.432 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.432 07:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:38.432 [2024-11-20 07:26:02.568991] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:38.432 [2024-11-20 07:26:02.569760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:38.691 [2024-11-20 07:26:02.820226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:38.949 [2024-11-20 07:26:03.151857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:39.517 114.20 IOPS, 342.60 MiB/s [2024-11-20T07:26:03.806Z] 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.517 [2024-11-20 07:26:03.569029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.517 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:39.517 "name": "raid_bdev1", 00:28:39.517 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:39.517 "strip_size_kb": 0, 00:28:39.517 "state": "online", 00:28:39.517 "raid_level": "raid1", 00:28:39.517 "superblock": false, 00:28:39.517 "num_base_bdevs": 4, 00:28:39.517 "num_base_bdevs_discovered": 3, 00:28:39.517 "num_base_bdevs_operational": 3, 00:28:39.517 "process": { 00:28:39.517 "type": "rebuild", 00:28:39.517 "target": "spare", 00:28:39.517 "progress": { 00:28:39.517 "blocks": 30720, 00:28:39.517 "percent": 46 00:28:39.517 } 00:28:39.517 }, 00:28:39.517 "base_bdevs_list": [ 00:28:39.517 { 00:28:39.517 "name": "spare", 00:28:39.517 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:39.517 "is_configured": true, 00:28:39.517 "data_offset": 0, 00:28:39.517 "data_size": 65536 00:28:39.517 }, 00:28:39.517 { 00:28:39.517 "name": null, 00:28:39.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.517 "is_configured": false, 00:28:39.517 "data_offset": 0, 00:28:39.517 "data_size": 65536 00:28:39.517 }, 00:28:39.517 { 00:28:39.517 "name": "BaseBdev3", 00:28:39.517 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:39.517 "is_configured": true, 00:28:39.517 "data_offset": 0, 00:28:39.517 "data_size": 65536 00:28:39.517 }, 00:28:39.517 { 00:28:39.517 "name": "BaseBdev4", 00:28:39.518 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:39.518 "is_configured": true, 00:28:39.518 "data_offset": 0, 00:28:39.518 "data_size": 65536 00:28:39.518 } 00:28:39.518 ] 00:28:39.518 }' 00:28:39.518 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:39.518 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.518 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:28:39.518 [2024-11-20 07:26:03.689229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:39.518 [2024-11-20 07:26:03.689670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:39.518 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.518 07:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:40.366 106.50 IOPS, 319.50 MiB/s [2024-11-20T07:26:04.655Z] [2024-11-20 07:26:04.406710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:40.625 "name": "raid_bdev1", 00:28:40.625 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:40.625 "strip_size_kb": 0, 00:28:40.625 "state": "online", 00:28:40.625 "raid_level": "raid1", 00:28:40.625 "superblock": false, 00:28:40.625 "num_base_bdevs": 4, 00:28:40.625 "num_base_bdevs_discovered": 3, 00:28:40.625 "num_base_bdevs_operational": 3, 00:28:40.625 "process": { 00:28:40.625 "type": "rebuild", 00:28:40.625 "target": "spare", 00:28:40.625 "progress": { 00:28:40.625 "blocks": 49152, 00:28:40.625 "percent": 75 00:28:40.625 } 00:28:40.625 }, 00:28:40.625 "base_bdevs_list": [ 00:28:40.625 { 00:28:40.625 "name": "spare", 00:28:40.625 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:40.625 "is_configured": true, 00:28:40.625 "data_offset": 0, 00:28:40.625 "data_size": 65536 00:28:40.625 }, 00:28:40.625 { 00:28:40.625 "name": null, 00:28:40.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.625 "is_configured": false, 00:28:40.625 "data_offset": 0, 00:28:40.625 "data_size": 65536 00:28:40.625 }, 00:28:40.625 { 00:28:40.625 "name": "BaseBdev3", 00:28:40.625 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:40.625 "is_configured": true, 00:28:40.625 "data_offset": 0, 00:28:40.625 "data_size": 65536 00:28:40.625 }, 00:28:40.625 { 00:28:40.625 "name": "BaseBdev4", 00:28:40.625 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:40.625 "is_configured": true, 00:28:40.625 "data_offset": 0, 00:28:40.625 "data_size": 65536 00:28:40.625 } 00:28:40.625 ] 00:28:40.625 }' 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:40.625 07:26:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.625 07:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:41.200 [2024-11-20 07:26:05.191204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:28:41.463 96.86 IOPS, 290.57 MiB/s [2024-11-20T07:26:05.752Z] [2024-11-20 07:26:05.642643] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:41.463 [2024-11-20 07:26:05.751003] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:41.722 [2024-11-20 07:26:05.754815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:41.722 "name": "raid_bdev1", 00:28:41.722 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:41.722 "strip_size_kb": 0, 00:28:41.722 "state": "online", 00:28:41.722 "raid_level": "raid1", 00:28:41.722 "superblock": false, 00:28:41.722 "num_base_bdevs": 4, 00:28:41.722 "num_base_bdevs_discovered": 3, 00:28:41.722 "num_base_bdevs_operational": 3, 00:28:41.722 "base_bdevs_list": [ 00:28:41.722 { 00:28:41.722 "name": "spare", 00:28:41.722 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:41.722 "is_configured": true, 00:28:41.722 "data_offset": 0, 00:28:41.722 "data_size": 65536 00:28:41.722 }, 00:28:41.722 { 00:28:41.722 "name": null, 00:28:41.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.722 "is_configured": false, 00:28:41.722 "data_offset": 0, 00:28:41.722 "data_size": 65536 00:28:41.722 }, 00:28:41.722 { 00:28:41.722 "name": "BaseBdev3", 00:28:41.722 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:41.722 "is_configured": true, 00:28:41.722 "data_offset": 0, 00:28:41.722 "data_size": 65536 00:28:41.722 }, 00:28:41.722 { 00:28:41.722 "name": "BaseBdev4", 00:28:41.722 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:41.722 "is_configured": true, 00:28:41.722 "data_offset": 0, 00:28:41.722 "data_size": 65536 00:28:41.722 } 00:28:41.722 ] 00:28:41.722 }' 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:41.722 07:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:28:41.981 07:26:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:41.981 "name": "raid_bdev1", 00:28:41.981 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:41.981 "strip_size_kb": 0, 00:28:41.981 "state": "online", 00:28:41.981 "raid_level": "raid1", 00:28:41.981 "superblock": false, 00:28:41.981 "num_base_bdevs": 4, 00:28:41.981 "num_base_bdevs_discovered": 3, 00:28:41.981 "num_base_bdevs_operational": 3, 00:28:41.981 "base_bdevs_list": [ 00:28:41.981 { 00:28:41.981 "name": "spare", 00:28:41.981 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:41.981 "is_configured": true, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": null, 00:28:41.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.981 "is_configured": false, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 
00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": "BaseBdev3", 00:28:41.981 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:41.981 "is_configured": true, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": "BaseBdev4", 00:28:41.981 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:41.981 "is_configured": true, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 } 00:28:41.981 ] 00:28:41.981 }' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:41.981 07:26:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:41.981 88.62 IOPS, 265.88 MiB/s [2024-11-20T07:26:06.270Z] 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:41.981 "name": "raid_bdev1", 00:28:41.981 "uuid": "6891701e-e160-4a55-a32a-c0c4e74c990a", 00:28:41.981 "strip_size_kb": 0, 00:28:41.981 "state": "online", 00:28:41.981 "raid_level": "raid1", 00:28:41.981 "superblock": false, 00:28:41.981 "num_base_bdevs": 4, 00:28:41.981 "num_base_bdevs_discovered": 3, 00:28:41.981 "num_base_bdevs_operational": 3, 00:28:41.981 "base_bdevs_list": [ 00:28:41.981 { 00:28:41.981 "name": "spare", 00:28:41.981 "uuid": "72d31c75-5f95-5843-b2e7-96c6905a7894", 00:28:41.981 "is_configured": true, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": null, 00:28:41.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.981 "is_configured": false, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": "BaseBdev3", 00:28:41.981 "uuid": "de2e7061-8e15-501d-af1e-78bc4bad7f50", 00:28:41.981 "is_configured": true, 00:28:41.981 "data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 }, 00:28:41.981 { 00:28:41.981 "name": "BaseBdev4", 00:28:41.981 "uuid": "f8e2daa9-1829-5d59-a20f-4d5b50e8dcbc", 00:28:41.981 "is_configured": true, 00:28:41.981 
"data_offset": 0, 00:28:41.981 "data_size": 65536 00:28:41.981 } 00:28:41.981 ] 00:28:41.981 }' 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:41.981 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.549 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:42.549 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.549 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.549 [2024-11-20 07:26:06.675581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:42.549 [2024-11-20 07:26:06.675647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:42.549 00:28:42.549 Latency(us) 00:28:42.549 [2024-11-20T07:26:06.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.549 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:42.549 raid_bdev1 : 8.58 85.46 256.37 0.00 0.00 16868.64 258.79 119156.36 00:28:42.549 [2024-11-20T07:26:06.838Z] =================================================================================================================== 00:28:42.549 [2024-11-20T07:26:06.838Z] Total : 85.46 256.37 0.00 0.00 16868.64 258.79 119156.36 00:28:42.549 [2024-11-20 07:26:06.776222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.549 [2024-11-20 07:26:06.776269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:42.549 [2024-11-20 07:26:06.776396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:42.549 [2024-11-20 07:26:06.776418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:42.549 
{ 00:28:42.549 "results": [ 00:28:42.549 { 00:28:42.549 "job": "raid_bdev1", 00:28:42.549 "core_mask": "0x1", 00:28:42.549 "workload": "randrw", 00:28:42.549 "percentage": 50, 00:28:42.550 "status": "finished", 00:28:42.550 "queue_depth": 2, 00:28:42.550 "io_size": 3145728, 00:28:42.550 "runtime": 8.577441, 00:28:42.550 "iops": 85.45672304828445, 00:28:42.550 "mibps": 256.3701691448533, 00:28:42.550 "io_failed": 0, 00:28:42.550 "io_timeout": 0, 00:28:42.550 "avg_latency_us": 16868.636606722062, 00:28:42.550 "min_latency_us": 258.7927272727273, 00:28:42.550 "max_latency_us": 119156.36363636363 00:28:42.550 } 00:28:42.550 ], 00:28:42.550 "core_count": 1 00:28:42.550 } 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:42.550 07:26:06 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:42.550 07:26:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:28:43.118 /dev/nbd0 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:28:43.118 1+0 records in 00:28:43.118 1+0 records out 00:28:43.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057637 s, 7.1 MB/s 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:43.118 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.119 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:28:43.376 /dev/nbd1 00:28:43.376 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:43.376 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:43.376 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:43.377 1+0 records in 00:28:43.377 1+0 records out 00:28:43.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246809 s, 16.6 MB/s 00:28:43.377 07:26:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.377 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:43.637 07:26:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.896 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.897 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:28:44.156 /dev/nbd1 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:44.156 
07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.156 1+0 records in 00:28:44.156 1+0 records out 00:28:44.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463871 s, 8.8 MB/s 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.156 07:26:08 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.156 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.414 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:44.672 07:26:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:44.673 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:44.673 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.673 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:44.673 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.673 07:26:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79274 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79274 ']' 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79274 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:28:44.931 07:26:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79274 00:28:44.931 killing process with pid 79274 00:28:44.931 Received shutdown signal, test time was about 10.924286 seconds 00:28:44.931 00:28:44.931 Latency(us) 00:28:44.931 [2024-11-20T07:26:09.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.931 [2024-11-20T07:26:09.220Z] =================================================================================================================== 00:28:44.931 [2024-11-20T07:26:09.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79274' 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79274 00:28:44.931 [2024-11-20 07:26:09.106363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:44.931 07:26:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79274 00:28:45.189 [2024-11-20 07:26:09.429956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:28:46.568 00:28:46.568 real 0m14.451s 00:28:46.568 user 0m18.983s 00:28:46.568 sys 0m1.796s 00:28:46.568 ************************************ 00:28:46.568 END TEST raid_rebuild_test_io 00:28:46.568 ************************************ 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.568 07:26:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:28:46.568 07:26:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:46.568 07:26:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.568 07:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:46.568 ************************************ 00:28:46.568 START TEST raid_rebuild_test_sb_io 00:28:46.568 ************************************ 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:46.568 07:26:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:46.568 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79690 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79690 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79690 ']' 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.569 07:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:46.569 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:46.569 Zero copy mechanism will not be used. 00:28:46.569 [2024-11-20 07:26:10.639069] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:28:46.569 [2024-11-20 07:26:10.639248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79690 ] 00:28:46.569 [2024-11-20 07:26:10.828467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.828 [2024-11-20 07:26:10.954474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.087 [2024-11-20 07:26:11.150195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:47.087 [2024-11-20 07:26:11.150249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.346 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 BaseBdev1_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 [2024-11-20 07:26:11.680357] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:47.606 [2024-11-20 07:26:11.680609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.606 [2024-11-20 07:26:11.680683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:47.606 [2024-11-20 07:26:11.680854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.606 [2024-11-20 07:26:11.683838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.606 [2024-11-20 07:26:11.684053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:47.606 BaseBdev1 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 BaseBdev2_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 [2024-11-20 07:26:11.734816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:47.606 [2024-11-20 07:26:11.734897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:28:47.606 [2024-11-20 07:26:11.734923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:47.606 [2024-11-20 07:26:11.734969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.606 [2024-11-20 07:26:11.737852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.606 [2024-11-20 07:26:11.737910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:47.606 BaseBdev2 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 BaseBdev3_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 [2024-11-20 07:26:11.795140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:47.606 [2024-11-20 07:26:11.795431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.606 [2024-11-20 07:26:11.795469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:47.606 
[2024-11-20 07:26:11.795488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.606 [2024-11-20 07:26:11.798180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.606 [2024-11-20 07:26:11.798240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:47.606 BaseBdev3 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 BaseBdev4_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 [2024-11-20 07:26:11.846627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:47.606 [2024-11-20 07:26:11.846699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.606 [2024-11-20 07:26:11.846724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:47.606 [2024-11-20 07:26:11.846741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.606 [2024-11-20 07:26:11.849312] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.606 [2024-11-20 07:26:11.849509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:47.606 BaseBdev4 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.606 spare_malloc 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.606 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.866 spare_delay 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.866 [2024-11-20 07:26:11.907653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:47.866 [2024-11-20 07:26:11.907754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.866 [2024-11-20 07:26:11.907782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:28:47.866 [2024-11-20 07:26:11.907798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.866 [2024-11-20 07:26:11.910336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.866 [2024-11-20 07:26:11.910526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:47.866 spare 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.866 [2024-11-20 07:26:11.915745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:47.866 [2024-11-20 07:26:11.918219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:47.866 [2024-11-20 07:26:11.918474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:47.866 [2024-11-20 07:26:11.918606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:47.866 [2024-11-20 07:26:11.918937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:47.866 [2024-11-20 07:26:11.919038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:47.866 [2024-11-20 07:26:11.919517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:47.866 [2024-11-20 07:26:11.919928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:47.866 [2024-11-20 07:26:11.920117] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:47.866 [2024-11-20 07:26:11.920526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.866 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:47.866 "name": "raid_bdev1", 00:28:47.866 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:47.866 "strip_size_kb": 0, 00:28:47.866 "state": "online", 00:28:47.866 "raid_level": "raid1", 00:28:47.866 "superblock": true, 00:28:47.866 "num_base_bdevs": 4, 00:28:47.866 "num_base_bdevs_discovered": 4, 00:28:47.866 "num_base_bdevs_operational": 4, 00:28:47.866 "base_bdevs_list": [ 00:28:47.866 { 00:28:47.866 "name": "BaseBdev1", 00:28:47.866 "uuid": "50fc99d8-0c9b-5e08-99d2-0b7de22f024f", 00:28:47.866 "is_configured": true, 00:28:47.866 "data_offset": 2048, 00:28:47.866 "data_size": 63488 00:28:47.866 }, 00:28:47.866 { 00:28:47.866 "name": "BaseBdev2", 00:28:47.866 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:47.866 "is_configured": true, 00:28:47.866 "data_offset": 2048, 00:28:47.866 "data_size": 63488 00:28:47.866 }, 00:28:47.866 { 00:28:47.866 "name": "BaseBdev3", 00:28:47.866 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:47.866 "is_configured": true, 00:28:47.866 "data_offset": 2048, 00:28:47.866 "data_size": 63488 00:28:47.866 }, 00:28:47.866 { 00:28:47.866 "name": "BaseBdev4", 00:28:47.866 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:47.866 "is_configured": true, 00:28:47.866 "data_offset": 2048, 00:28:47.866 "data_size": 63488 00:28:47.866 } 00:28:47.866 ] 00:28:47.866 }' 00:28:47.867 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:47.867 07:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:48.435 [2024-11-20 07:26:12.461180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:48.435 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.436 [2024-11-20 07:26:12.568722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.436 07:26:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:48.436 "name": "raid_bdev1", 00:28:48.436 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:48.436 "strip_size_kb": 0, 00:28:48.436 "state": "online", 00:28:48.436 "raid_level": "raid1", 00:28:48.436 
"superblock": true, 00:28:48.436 "num_base_bdevs": 4, 00:28:48.436 "num_base_bdevs_discovered": 3, 00:28:48.436 "num_base_bdevs_operational": 3, 00:28:48.436 "base_bdevs_list": [ 00:28:48.436 { 00:28:48.436 "name": null, 00:28:48.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.436 "is_configured": false, 00:28:48.436 "data_offset": 0, 00:28:48.436 "data_size": 63488 00:28:48.436 }, 00:28:48.436 { 00:28:48.436 "name": "BaseBdev2", 00:28:48.436 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:48.436 "is_configured": true, 00:28:48.436 "data_offset": 2048, 00:28:48.436 "data_size": 63488 00:28:48.436 }, 00:28:48.436 { 00:28:48.436 "name": "BaseBdev3", 00:28:48.436 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:48.436 "is_configured": true, 00:28:48.436 "data_offset": 2048, 00:28:48.436 "data_size": 63488 00:28:48.436 }, 00:28:48.436 { 00:28:48.436 "name": "BaseBdev4", 00:28:48.436 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:48.436 "is_configured": true, 00:28:48.436 "data_offset": 2048, 00:28:48.436 "data_size": 63488 00:28:48.436 } 00:28:48.436 ] 00:28:48.436 }' 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:48.436 07:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.436 [2024-11-20 07:26:12.697145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:48.436 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:48.436 Zero copy mechanism will not be used. 00:28:48.436 Running I/O for 60 seconds... 
00:28:49.004 07:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.004 07:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.004 07:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:49.004 [2024-11-20 07:26:13.096718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.004 07:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.004 07:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:49.004 [2024-11-20 07:26:13.184284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:49.004 [2024-11-20 07:26:13.187113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:49.263 [2024-11-20 07:26:13.316780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:49.263 [2024-11-20 07:26:13.318404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:49.263 [2024-11-20 07:26:13.542447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:49.263 [2024-11-20 07:26:13.542899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:49.781 124.00 IOPS, 372.00 MiB/s [2024-11-20T07:26:14.070Z] [2024-11-20 07:26:14.001805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:50.040 "name": "raid_bdev1", 00:28:50.040 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:50.040 "strip_size_kb": 0, 00:28:50.040 "state": "online", 00:28:50.040 "raid_level": "raid1", 00:28:50.040 "superblock": true, 00:28:50.040 "num_base_bdevs": 4, 00:28:50.040 "num_base_bdevs_discovered": 4, 00:28:50.040 "num_base_bdevs_operational": 4, 00:28:50.040 "process": { 00:28:50.040 "type": "rebuild", 00:28:50.040 "target": "spare", 00:28:50.040 "progress": { 00:28:50.040 "blocks": 10240, 00:28:50.040 "percent": 16 00:28:50.040 } 00:28:50.040 }, 00:28:50.040 "base_bdevs_list": [ 00:28:50.040 { 00:28:50.040 "name": "spare", 00:28:50.040 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:50.040 "is_configured": true, 00:28:50.040 "data_offset": 2048, 00:28:50.040 "data_size": 63488 00:28:50.040 }, 00:28:50.040 { 00:28:50.040 "name": "BaseBdev2", 00:28:50.040 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:50.040 "is_configured": true, 
00:28:50.040 "data_offset": 2048, 00:28:50.040 "data_size": 63488 00:28:50.040 }, 00:28:50.040 { 00:28:50.040 "name": "BaseBdev3", 00:28:50.040 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:50.040 "is_configured": true, 00:28:50.040 "data_offset": 2048, 00:28:50.040 "data_size": 63488 00:28:50.040 }, 00:28:50.040 { 00:28:50.040 "name": "BaseBdev4", 00:28:50.040 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:50.040 "is_configured": true, 00:28:50.040 "data_offset": 2048, 00:28:50.040 "data_size": 63488 00:28:50.040 } 00:28:50.040 ] 00:28:50.040 }' 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.040 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.040 [2024-11-20 07:26:14.321734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.299 [2024-11-20 07:26:14.468039] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.299 [2024-11-20 07:26:14.481528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.299 [2024-11-20 07:26:14.481772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.299 [2024-11-20 07:26:14.481830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:28:50.299 [2024-11-20 07:26:14.510245] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:50.299 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.300 07:26:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:50.300 "name": "raid_bdev1", 00:28:50.300 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:50.300 "strip_size_kb": 0, 00:28:50.300 "state": "online", 00:28:50.300 "raid_level": "raid1", 00:28:50.300 "superblock": true, 00:28:50.300 "num_base_bdevs": 4, 00:28:50.300 "num_base_bdevs_discovered": 3, 00:28:50.300 "num_base_bdevs_operational": 3, 00:28:50.300 "base_bdevs_list": [ 00:28:50.300 { 00:28:50.300 "name": null, 00:28:50.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.300 "is_configured": false, 00:28:50.300 "data_offset": 0, 00:28:50.300 "data_size": 63488 00:28:50.300 }, 00:28:50.300 { 00:28:50.300 "name": "BaseBdev2", 00:28:50.300 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:50.300 "is_configured": true, 00:28:50.300 "data_offset": 2048, 00:28:50.300 "data_size": 63488 00:28:50.300 }, 00:28:50.300 { 00:28:50.300 "name": "BaseBdev3", 00:28:50.300 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:50.300 "is_configured": true, 00:28:50.300 "data_offset": 2048, 00:28:50.300 "data_size": 63488 00:28:50.300 }, 00:28:50.300 { 00:28:50.300 "name": "BaseBdev4", 00:28:50.300 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:50.300 "is_configured": true, 00:28:50.300 "data_offset": 2048, 00:28:50.300 "data_size": 63488 00:28:50.300 } 00:28:50.300 ] 00:28:50.300 }' 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:50.300 07:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.818 99.00 IOPS, 297.00 MiB/s [2024-11-20T07:26:15.107Z] 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:50.818 "name": "raid_bdev1", 00:28:50.818 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:50.818 "strip_size_kb": 0, 00:28:50.818 "state": "online", 00:28:50.818 "raid_level": "raid1", 00:28:50.818 "superblock": true, 00:28:50.818 "num_base_bdevs": 4, 00:28:50.818 "num_base_bdevs_discovered": 3, 00:28:50.818 "num_base_bdevs_operational": 3, 00:28:50.818 "base_bdevs_list": [ 00:28:50.818 { 00:28:50.818 "name": null, 00:28:50.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.818 "is_configured": false, 00:28:50.818 "data_offset": 0, 00:28:50.818 "data_size": 63488 00:28:50.818 }, 00:28:50.818 { 00:28:50.818 "name": "BaseBdev2", 00:28:50.818 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:50.818 "is_configured": true, 00:28:50.818 "data_offset": 2048, 00:28:50.818 "data_size": 63488 00:28:50.818 }, 00:28:50.818 { 00:28:50.818 "name": "BaseBdev3", 00:28:50.818 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:50.818 "is_configured": true, 00:28:50.818 "data_offset": 2048, 00:28:50.818 "data_size": 63488 00:28:50.818 }, 00:28:50.818 { 00:28:50.818 "name": 
"BaseBdev4", 00:28:50.818 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:50.818 "is_configured": true, 00:28:50.818 "data_offset": 2048, 00:28:50.818 "data_size": 63488 00:28:50.818 } 00:28:50.818 ] 00:28:50.818 }' 00:28:50.818 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:51.078 [2024-11-20 07:26:15.223213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.078 07:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:51.078 [2024-11-20 07:26:15.297617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:51.078 [2024-11-20 07:26:15.300117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.337 [2024-11-20 07:26:15.432846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:51.337 [2024-11-20 07:26:15.434762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:51.597 [2024-11-20 07:26:15.651616] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:51.597 [2024-11-20 07:26:15.652232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:51.856 114.67 IOPS, 344.00 MiB/s [2024-11-20T07:26:16.145Z] [2024-11-20 07:26:15.985934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.115 "name": "raid_bdev1", 00:28:52.115 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:52.115 "strip_size_kb": 0, 00:28:52.115 "state": "online", 00:28:52.115 "raid_level": "raid1", 00:28:52.115 "superblock": true, 00:28:52.115 "num_base_bdevs": 4, 00:28:52.115 
"num_base_bdevs_discovered": 4, 00:28:52.115 "num_base_bdevs_operational": 4, 00:28:52.115 "process": { 00:28:52.115 "type": "rebuild", 00:28:52.115 "target": "spare", 00:28:52.115 "progress": { 00:28:52.115 "blocks": 12288, 00:28:52.115 "percent": 19 00:28:52.115 } 00:28:52.115 }, 00:28:52.115 "base_bdevs_list": [ 00:28:52.115 { 00:28:52.115 "name": "spare", 00:28:52.115 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:52.115 "is_configured": true, 00:28:52.115 "data_offset": 2048, 00:28:52.115 "data_size": 63488 00:28:52.115 }, 00:28:52.115 { 00:28:52.115 "name": "BaseBdev2", 00:28:52.115 "uuid": "f6f604f9-73e1-5a58-824b-f272ad4fedf5", 00:28:52.115 "is_configured": true, 00:28:52.115 "data_offset": 2048, 00:28:52.115 "data_size": 63488 00:28:52.115 }, 00:28:52.115 { 00:28:52.115 "name": "BaseBdev3", 00:28:52.115 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:52.115 "is_configured": true, 00:28:52.115 "data_offset": 2048, 00:28:52.115 "data_size": 63488 00:28:52.115 }, 00:28:52.115 { 00:28:52.115 "name": "BaseBdev4", 00:28:52.115 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:52.115 "is_configured": true, 00:28:52.115 "data_offset": 2048, 00:28:52.115 "data_size": 63488 00:28:52.115 } 00:28:52.115 ] 00:28:52.115 }' 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:52.115 [2024-11-20 07:26:16.357496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.115 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:52.374 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.374 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.374 [2024-11-20 07:26:16.450471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:52.374 [2024-11-20 07:26:16.589777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:52.374 [2024-11-20 07:26:16.590716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:52.632 101.00 IOPS, 303.00 MiB/s [2024-11-20T07:26:16.921Z] [2024-11-20 07:26:16.794617] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:28:52.632 [2024-11-20 07:26:16.794684] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:28:52.632 [2024-11-20 07:26:16.806003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # 
base_bdevs[1]= 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.632 "name": "raid_bdev1", 00:28:52.632 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:52.632 "strip_size_kb": 0, 00:28:52.632 "state": "online", 00:28:52.632 "raid_level": "raid1", 00:28:52.632 "superblock": true, 00:28:52.632 "num_base_bdevs": 4, 00:28:52.632 "num_base_bdevs_discovered": 3, 00:28:52.632 "num_base_bdevs_operational": 3, 00:28:52.632 "process": { 00:28:52.632 "type": "rebuild", 00:28:52.632 "target": "spare", 00:28:52.632 "progress": { 00:28:52.632 "blocks": 16384, 00:28:52.632 "percent": 25 00:28:52.632 } 00:28:52.632 }, 00:28:52.632 "base_bdevs_list": [ 00:28:52.632 { 
00:28:52.632 "name": "spare", 00:28:52.632 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:52.632 "is_configured": true, 00:28:52.632 "data_offset": 2048, 00:28:52.632 "data_size": 63488 00:28:52.632 }, 00:28:52.632 { 00:28:52.632 "name": null, 00:28:52.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.632 "is_configured": false, 00:28:52.632 "data_offset": 0, 00:28:52.632 "data_size": 63488 00:28:52.632 }, 00:28:52.632 { 00:28:52.632 "name": "BaseBdev3", 00:28:52.632 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:52.632 "is_configured": true, 00:28:52.632 "data_offset": 2048, 00:28:52.632 "data_size": 63488 00:28:52.632 }, 00:28:52.632 { 00:28:52.632 "name": "BaseBdev4", 00:28:52.632 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:52.632 "is_configured": true, 00:28:52.632 "data_offset": 2048, 00:28:52.632 "data_size": 63488 00:28:52.632 } 00:28:52.632 ] 00:28:52.632 }' 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:52.632 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.962 07:26:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.962 07:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.962 "name": "raid_bdev1", 00:28:52.962 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:52.962 "strip_size_kb": 0, 00:28:52.962 "state": "online", 00:28:52.962 "raid_level": "raid1", 00:28:52.962 "superblock": true, 00:28:52.962 "num_base_bdevs": 4, 00:28:52.962 "num_base_bdevs_discovered": 3, 00:28:52.962 "num_base_bdevs_operational": 3, 00:28:52.962 "process": { 00:28:52.962 "type": "rebuild", 00:28:52.962 "target": "spare", 00:28:52.962 "progress": { 00:28:52.962 "blocks": 16384, 00:28:52.962 "percent": 25 00:28:52.962 } 00:28:52.962 }, 00:28:52.962 "base_bdevs_list": [ 00:28:52.962 { 00:28:52.962 "name": "spare", 00:28:52.962 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:52.962 "is_configured": true, 00:28:52.962 "data_offset": 2048, 00:28:52.962 "data_size": 63488 00:28:52.962 }, 00:28:52.962 { 00:28:52.962 "name": null, 00:28:52.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.962 "is_configured": false, 00:28:52.962 "data_offset": 0, 00:28:52.962 "data_size": 63488 00:28:52.962 }, 00:28:52.962 { 00:28:52.962 "name": "BaseBdev3", 00:28:52.962 "uuid": 
"2700d212-fddd-51f2-b225-4215839bd249", 00:28:52.962 "is_configured": true, 00:28:52.962 "data_offset": 2048, 00:28:52.962 "data_size": 63488 00:28:52.962 }, 00:28:52.962 { 00:28:52.962 "name": "BaseBdev4", 00:28:52.962 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:52.962 "is_configured": true, 00:28:52.962 "data_offset": 2048, 00:28:52.962 "data_size": 63488 00:28:52.962 } 00:28:52.962 ] 00:28:52.962 }' 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.962 07:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:53.565 [2024-11-20 07:26:17.678001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:54.133 95.80 IOPS, 287.40 MiB/s [2024-11-20T07:26:18.422Z] 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:54.133 "name": "raid_bdev1", 00:28:54.133 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:54.133 "strip_size_kb": 0, 00:28:54.133 "state": "online", 00:28:54.133 "raid_level": "raid1", 00:28:54.133 "superblock": true, 00:28:54.133 "num_base_bdevs": 4, 00:28:54.133 "num_base_bdevs_discovered": 3, 00:28:54.133 "num_base_bdevs_operational": 3, 00:28:54.133 "process": { 00:28:54.133 "type": "rebuild", 00:28:54.133 "target": "spare", 00:28:54.133 "progress": { 00:28:54.133 "blocks": 34816, 00:28:54.133 "percent": 54 00:28:54.133 } 00:28:54.133 }, 00:28:54.133 "base_bdevs_list": [ 00:28:54.133 { 00:28:54.133 "name": "spare", 00:28:54.133 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:54.133 "is_configured": true, 00:28:54.133 "data_offset": 2048, 00:28:54.133 "data_size": 63488 00:28:54.133 }, 00:28:54.133 { 00:28:54.133 "name": null, 00:28:54.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.133 "is_configured": false, 00:28:54.133 "data_offset": 0, 00:28:54.133 "data_size": 63488 00:28:54.133 }, 00:28:54.133 { 00:28:54.133 "name": "BaseBdev3", 00:28:54.133 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:54.133 "is_configured": true, 00:28:54.133 "data_offset": 2048, 00:28:54.133 "data_size": 63488 00:28:54.133 }, 00:28:54.133 { 00:28:54.133 "name": "BaseBdev4", 00:28:54.133 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:54.133 "is_configured": true, 00:28:54.133 "data_offset": 2048, 
00:28:54.133 "data_size": 63488 00:28:54.133 } 00:28:54.133 ] 00:28:54.133 }' 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.133 07:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:54.392 [2024-11-20 07:26:18.499093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:55.218 89.17 IOPS, 267.50 MiB/s [2024-11-20T07:26:19.507Z] 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:55.218 "name": "raid_bdev1", 00:28:55.218 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:55.218 "strip_size_kb": 0, 00:28:55.218 "state": "online", 00:28:55.218 "raid_level": "raid1", 00:28:55.218 "superblock": true, 00:28:55.218 "num_base_bdevs": 4, 00:28:55.218 "num_base_bdevs_discovered": 3, 00:28:55.218 "num_base_bdevs_operational": 3, 00:28:55.218 "process": { 00:28:55.218 "type": "rebuild", 00:28:55.218 "target": "spare", 00:28:55.218 "progress": { 00:28:55.218 "blocks": 51200, 00:28:55.218 "percent": 80 00:28:55.218 } 00:28:55.218 }, 00:28:55.218 "base_bdevs_list": [ 00:28:55.218 { 00:28:55.218 "name": "spare", 00:28:55.218 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:55.218 "is_configured": true, 00:28:55.218 "data_offset": 2048, 00:28:55.218 "data_size": 63488 00:28:55.218 }, 00:28:55.218 { 00:28:55.218 "name": null, 00:28:55.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.218 "is_configured": false, 00:28:55.218 "data_offset": 0, 00:28:55.218 "data_size": 63488 00:28:55.218 }, 00:28:55.218 { 00:28:55.218 "name": "BaseBdev3", 00:28:55.218 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:55.218 "is_configured": true, 00:28:55.218 "data_offset": 2048, 00:28:55.218 "data_size": 63488 00:28:55.218 }, 00:28:55.218 { 00:28:55.218 "name": "BaseBdev4", 00:28:55.218 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:55.218 "is_configured": true, 00:28:55.218 "data_offset": 2048, 00:28:55.218 "data_size": 63488 00:28:55.218 } 00:28:55.218 ] 00:28:55.218 }' 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:55.218 07:26:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:55.218 07:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:55.736 82.00 IOPS, 246.00 MiB/s [2024-11-20T07:26:20.025Z] [2024-11-20 07:26:19.887549] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:55.736 [2024-11-20 07:26:19.987506] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:55.736 [2024-11-20 07:26:19.998117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:56.304 "name": "raid_bdev1", 00:28:56.304 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:56.304 "strip_size_kb": 0, 00:28:56.304 "state": "online", 00:28:56.304 "raid_level": "raid1", 00:28:56.304 "superblock": true, 00:28:56.304 "num_base_bdevs": 4, 00:28:56.304 "num_base_bdevs_discovered": 3, 00:28:56.304 "num_base_bdevs_operational": 3, 00:28:56.304 "base_bdevs_list": [ 00:28:56.304 { 00:28:56.304 "name": "spare", 00:28:56.304 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:56.304 "is_configured": true, 00:28:56.304 "data_offset": 2048, 00:28:56.304 "data_size": 63488 00:28:56.304 }, 00:28:56.304 { 00:28:56.304 "name": null, 00:28:56.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.304 "is_configured": false, 00:28:56.304 "data_offset": 0, 00:28:56.304 "data_size": 63488 00:28:56.304 }, 00:28:56.304 { 00:28:56.304 "name": "BaseBdev3", 00:28:56.304 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:56.304 "is_configured": true, 00:28:56.304 "data_offset": 2048, 00:28:56.304 "data_size": 63488 00:28:56.304 }, 00:28:56.304 { 00:28:56.304 "name": "BaseBdev4", 00:28:56.304 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:56.304 "is_configured": true, 00:28:56.304 "data_offset": 2048, 00:28:56.304 "data_size": 63488 00:28:56.304 } 00:28:56.304 ] 00:28:56.304 }' 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:56.304 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:28:56.563 07:26:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:56.563 "name": "raid_bdev1", 00:28:56.563 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:56.563 "strip_size_kb": 0, 00:28:56.563 "state": "online", 00:28:56.563 "raid_level": "raid1", 00:28:56.563 "superblock": true, 00:28:56.563 "num_base_bdevs": 4, 00:28:56.563 "num_base_bdevs_discovered": 3, 00:28:56.563 "num_base_bdevs_operational": 3, 00:28:56.563 "base_bdevs_list": [ 00:28:56.563 { 00:28:56.563 "name": "spare", 00:28:56.563 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:56.563 "is_configured": true, 00:28:56.563 "data_offset": 2048, 00:28:56.563 "data_size": 63488 00:28:56.563 }, 00:28:56.563 { 00:28:56.563 "name": null, 00:28:56.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.563 "is_configured": false, 00:28:56.563 "data_offset": 
0, 00:28:56.563 "data_size": 63488 00:28:56.563 }, 00:28:56.563 { 00:28:56.563 "name": "BaseBdev3", 00:28:56.563 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:56.563 "is_configured": true, 00:28:56.563 "data_offset": 2048, 00:28:56.563 "data_size": 63488 00:28:56.563 }, 00:28:56.563 { 00:28:56.563 "name": "BaseBdev4", 00:28:56.563 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:56.563 "is_configured": true, 00:28:56.563 "data_offset": 2048, 00:28:56.563 "data_size": 63488 00:28:56.563 } 00:28:56.563 ] 00:28:56.563 }' 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:56.563 76.75 IOPS, 230.25 MiB/s [2024-11-20T07:26:20.852Z] 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:56.563 07:26:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.563 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.822 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:56.822 "name": "raid_bdev1", 00:28:56.822 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:56.823 "strip_size_kb": 0, 00:28:56.823 "state": "online", 00:28:56.823 "raid_level": "raid1", 00:28:56.823 "superblock": true, 00:28:56.823 "num_base_bdevs": 4, 00:28:56.823 "num_base_bdevs_discovered": 3, 00:28:56.823 "num_base_bdevs_operational": 3, 00:28:56.823 "base_bdevs_list": [ 00:28:56.823 { 00:28:56.823 "name": "spare", 00:28:56.823 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:56.823 "is_configured": true, 00:28:56.823 "data_offset": 2048, 00:28:56.823 "data_size": 63488 00:28:56.823 }, 00:28:56.823 { 00:28:56.823 "name": null, 00:28:56.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.823 "is_configured": false, 00:28:56.823 "data_offset": 0, 00:28:56.823 "data_size": 63488 00:28:56.823 }, 00:28:56.823 { 00:28:56.823 "name": "BaseBdev3", 00:28:56.823 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:56.823 "is_configured": true, 00:28:56.823 "data_offset": 2048, 00:28:56.823 "data_size": 63488 00:28:56.823 }, 00:28:56.823 { 00:28:56.823 "name": "BaseBdev4", 00:28:56.823 "uuid": 
"46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:56.823 "is_configured": true, 00:28:56.823 "data_offset": 2048, 00:28:56.823 "data_size": 63488 00:28:56.823 } 00:28:56.823 ] 00:28:56.823 }' 00:28:56.823 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:56.823 07:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.081 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:57.081 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.081 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.081 [2024-11-20 07:26:21.341133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:57.081 [2024-11-20 07:26:21.341168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:57.341 00:28:57.341 Latency(us) 00:28:57.341 [2024-11-20T07:26:21.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.341 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:57.341 raid_bdev1 : 8.74 73.79 221.37 0.00 0.00 18690.96 262.52 122969.37 00:28:57.341 [2024-11-20T07:26:21.630Z] =================================================================================================================== 00:28:57.341 [2024-11-20T07:26:21.630Z] Total : 73.79 221.37 0.00 0.00 18690.96 262.52 122969.37 00:28:57.341 [2024-11-20 07:26:21.459337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.341 [2024-11-20 07:26:21.459394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:57.341 [2024-11-20 07:26:21.459525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:57.341 [2024-11-20 07:26:21.459562] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:57.341 { 00:28:57.341 "results": [ 00:28:57.341 { 00:28:57.341 "job": "raid_bdev1", 00:28:57.341 "core_mask": "0x1", 00:28:57.341 "workload": "randrw", 00:28:57.341 "percentage": 50, 00:28:57.341 "status": "finished", 00:28:57.341 "queue_depth": 2, 00:28:57.341 "io_size": 3145728, 00:28:57.341 "runtime": 8.740973, 00:28:57.341 "iops": 73.7904121200237, 00:28:57.341 "mibps": 221.37123636007112, 00:28:57.341 "io_failed": 0, 00:28:57.341 "io_timeout": 0, 00:28:57.341 "avg_latency_us": 18690.955816772374, 00:28:57.341 "min_latency_us": 262.5163636363636, 00:28:57.341 "max_latency_us": 122969.36727272728 00:28:57.341 } 00:28:57.341 ], 00:28:57.341 "core_count": 1 00:28:57.341 } 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:57.341 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:28:57.600 /dev/nbd0 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.600 1+0 records in 00:28:57.600 1+0 records out 00:28:57.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586502 s, 7.0 MB/s 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.600 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:57.601 07:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:28:57.860 /dev/nbd1 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.860 1+0 records in 00:28:57.860 1+0 records out 00:28:57.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645387 s, 6.3 MB/s 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:57.860 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.119 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:58.378 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:58.379 07:26:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.379 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:28:58.947 /dev/nbd1 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.947 1+0 records in 00:28:58.947 1+0 records out 00:28:58.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429335 s, 9.5 MB/s 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:58.947 07:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.947 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.225 
07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:59.225 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.488 07:26:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.488 [2024-11-20 07:26:23.672922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:59.488 [2024-11-20 07:26:23.673046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.488 [2024-11-20 07:26:23.673077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:59.488 [2024-11-20 07:26:23.673097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.488 [2024-11-20 07:26:23.676105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.488 [2024-11-20 07:26:23.676332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:59.488 [2024-11-20 07:26:23.676462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:59.488 [2024-11-20 07:26:23.676535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:59.488 [2024-11-20 07:26:23.676780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:59.488 [2024-11-20 
07:26:23.676933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:59.488 spare 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.488 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 [2024-11-20 07:26:23.777212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:59.747 [2024-11-20 07:26:23.777499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:59.747 [2024-11-20 07:26:23.778003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:28:59.747 [2024-11-20 07:26:23.778321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:59.747 [2024-11-20 07:26:23.778339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:59.747 [2024-11-20 07:26:23.778596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:59.747 "name": "raid_bdev1", 00:28:59.747 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:28:59.747 "strip_size_kb": 0, 00:28:59.747 "state": "online", 00:28:59.747 "raid_level": "raid1", 00:28:59.747 "superblock": true, 00:28:59.747 "num_base_bdevs": 4, 00:28:59.747 "num_base_bdevs_discovered": 3, 00:28:59.747 "num_base_bdevs_operational": 3, 00:28:59.747 "base_bdevs_list": [ 00:28:59.747 { 00:28:59.747 "name": "spare", 00:28:59.747 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:28:59.747 "is_configured": true, 00:28:59.747 "data_offset": 2048, 00:28:59.747 "data_size": 63488 00:28:59.747 }, 00:28:59.747 { 00:28:59.747 "name": null, 00:28:59.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.747 "is_configured": false, 
00:28:59.747 "data_offset": 2048, 00:28:59.747 "data_size": 63488 00:28:59.747 }, 00:28:59.747 { 00:28:59.747 "name": "BaseBdev3", 00:28:59.747 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:28:59.747 "is_configured": true, 00:28:59.747 "data_offset": 2048, 00:28:59.747 "data_size": 63488 00:28:59.747 }, 00:28:59.747 { 00:28:59.747 "name": "BaseBdev4", 00:28:59.747 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:28:59.747 "is_configured": true, 00:28:59.747 "data_offset": 2048, 00:28:59.747 "data_size": 63488 00:28:59.747 } 00:28:59.747 ] 00:28:59.747 }' 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:59.747 07:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.006 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.006 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:00.006 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:00.006 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:00.006 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.264 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:29:00.264 "name": "raid_bdev1", 00:29:00.264 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:00.264 "strip_size_kb": 0, 00:29:00.264 "state": "online", 00:29:00.264 "raid_level": "raid1", 00:29:00.265 "superblock": true, 00:29:00.265 "num_base_bdevs": 4, 00:29:00.265 "num_base_bdevs_discovered": 3, 00:29:00.265 "num_base_bdevs_operational": 3, 00:29:00.265 "base_bdevs_list": [ 00:29:00.265 { 00:29:00.265 "name": "spare", 00:29:00.265 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:29:00.265 "is_configured": true, 00:29:00.265 "data_offset": 2048, 00:29:00.265 "data_size": 63488 00:29:00.265 }, 00:29:00.265 { 00:29:00.265 "name": null, 00:29:00.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.265 "is_configured": false, 00:29:00.265 "data_offset": 2048, 00:29:00.265 "data_size": 63488 00:29:00.265 }, 00:29:00.265 { 00:29:00.265 "name": "BaseBdev3", 00:29:00.265 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:00.265 "is_configured": true, 00:29:00.265 "data_offset": 2048, 00:29:00.265 "data_size": 63488 00:29:00.265 }, 00:29:00.265 { 00:29:00.265 "name": "BaseBdev4", 00:29:00.265 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:00.265 "is_configured": true, 00:29:00.265 "data_offset": 2048, 00:29:00.265 "data_size": 63488 00:29:00.265 } 00:29:00.265 ] 00:29:00.265 }' 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.265 [2024-11-20 07:26:24.509523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:00.265 07:26:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.265 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.524 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:00.524 "name": "raid_bdev1", 00:29:00.524 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:00.524 "strip_size_kb": 0, 00:29:00.524 "state": "online", 00:29:00.524 "raid_level": "raid1", 00:29:00.524 "superblock": true, 00:29:00.524 "num_base_bdevs": 4, 00:29:00.524 "num_base_bdevs_discovered": 2, 00:29:00.524 "num_base_bdevs_operational": 2, 00:29:00.524 "base_bdevs_list": [ 00:29:00.524 { 00:29:00.524 "name": null, 00:29:00.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.524 "is_configured": false, 00:29:00.524 "data_offset": 0, 00:29:00.524 "data_size": 63488 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "name": null, 00:29:00.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.524 "is_configured": false, 00:29:00.524 "data_offset": 2048, 00:29:00.524 "data_size": 63488 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "name": "BaseBdev3", 00:29:00.524 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:00.524 "is_configured": true, 00:29:00.524 "data_offset": 2048, 00:29:00.524 "data_size": 63488 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "name": "BaseBdev4", 00:29:00.524 "uuid": 
"46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:00.524 "is_configured": true, 00:29:00.524 "data_offset": 2048, 00:29:00.524 "data_size": 63488 00:29:00.524 } 00:29:00.524 ] 00:29:00.524 }' 00:29:00.524 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:00.524 07:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.783 07:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:00.783 07:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.783 07:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.783 [2024-11-20 07:26:25.053874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:00.783 [2024-11-20 07:26:25.054339] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:00.783 [2024-11-20 07:26:25.054527] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:00.783 [2024-11-20 07:26:25.054804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:00.783 [2024-11-20 07:26:25.069320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:29:00.783 07:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.783 07:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:01.041 [2024-11-20 07:26:25.072031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:01.977 "name": "raid_bdev1", 00:29:01.977 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:01.977 "strip_size_kb": 0, 00:29:01.977 "state": "online", 
00:29:01.977 "raid_level": "raid1", 00:29:01.977 "superblock": true, 00:29:01.977 "num_base_bdevs": 4, 00:29:01.977 "num_base_bdevs_discovered": 3, 00:29:01.977 "num_base_bdevs_operational": 3, 00:29:01.977 "process": { 00:29:01.977 "type": "rebuild", 00:29:01.977 "target": "spare", 00:29:01.977 "progress": { 00:29:01.977 "blocks": 20480, 00:29:01.977 "percent": 32 00:29:01.977 } 00:29:01.977 }, 00:29:01.977 "base_bdevs_list": [ 00:29:01.977 { 00:29:01.977 "name": "spare", 00:29:01.977 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:29:01.977 "is_configured": true, 00:29:01.977 "data_offset": 2048, 00:29:01.977 "data_size": 63488 00:29:01.977 }, 00:29:01.977 { 00:29:01.977 "name": null, 00:29:01.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.977 "is_configured": false, 00:29:01.977 "data_offset": 2048, 00:29:01.977 "data_size": 63488 00:29:01.977 }, 00:29:01.977 { 00:29:01.977 "name": "BaseBdev3", 00:29:01.977 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:01.977 "is_configured": true, 00:29:01.977 "data_offset": 2048, 00:29:01.977 "data_size": 63488 00:29:01.977 }, 00:29:01.977 { 00:29:01.977 "name": "BaseBdev4", 00:29:01.977 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:01.977 "is_configured": true, 00:29:01.977 "data_offset": 2048, 00:29:01.977 "data_size": 63488 00:29:01.977 } 00:29:01.977 ] 00:29:01.977 }' 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:01.977 07:26:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.977 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.977 [2024-11-20 07:26:26.246932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:02.236 [2024-11-20 07:26:26.280914] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:02.236 [2024-11-20 07:26:26.281164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:02.236 [2024-11-20 07:26:26.281193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:02.236 [2024-11-20 07:26:26.281214] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.236 07:26:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.236 "name": "raid_bdev1", 00:29:02.236 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:02.236 "strip_size_kb": 0, 00:29:02.236 "state": "online", 00:29:02.236 "raid_level": "raid1", 00:29:02.236 "superblock": true, 00:29:02.236 "num_base_bdevs": 4, 00:29:02.236 "num_base_bdevs_discovered": 2, 00:29:02.236 "num_base_bdevs_operational": 2, 00:29:02.236 "base_bdevs_list": [ 00:29:02.236 { 00:29:02.236 "name": null, 00:29:02.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.236 "is_configured": false, 00:29:02.236 "data_offset": 0, 00:29:02.236 "data_size": 63488 00:29:02.236 }, 00:29:02.236 { 00:29:02.236 "name": null, 00:29:02.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.236 "is_configured": false, 00:29:02.236 "data_offset": 2048, 00:29:02.236 "data_size": 63488 00:29:02.236 }, 00:29:02.236 { 00:29:02.236 "name": "BaseBdev3", 00:29:02.236 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:02.236 "is_configured": true, 00:29:02.236 "data_offset": 2048, 00:29:02.236 "data_size": 63488 00:29:02.236 }, 00:29:02.236 { 00:29:02.236 "name": "BaseBdev4", 00:29:02.236 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:02.236 "is_configured": true, 00:29:02.236 "data_offset": 2048, 00:29:02.236 
"data_size": 63488 00:29:02.236 } 00:29:02.236 ] 00:29:02.236 }' 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.236 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.804 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:02.804 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.804 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.804 [2024-11-20 07:26:26.844425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:02.804 [2024-11-20 07:26:26.844513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.804 [2024-11-20 07:26:26.844547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:02.804 [2024-11-20 07:26:26.844563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.804 [2024-11-20 07:26:26.845289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.804 [2024-11-20 07:26:26.845477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:02.804 [2024-11-20 07:26:26.845651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:02.804 [2024-11-20 07:26:26.845691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:02.804 [2024-11-20 07:26:26.845706] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:02.804 [2024-11-20 07:26:26.845741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.804 [2024-11-20 07:26:26.858648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:29:02.804 spare 00:29:02.804 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.804 07:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:02.804 [2024-11-20 07:26:26.861135] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:03.741 "name": "raid_bdev1", 00:29:03.741 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:03.741 "strip_size_kb": 0, 00:29:03.741 
"state": "online", 00:29:03.741 "raid_level": "raid1", 00:29:03.741 "superblock": true, 00:29:03.741 "num_base_bdevs": 4, 00:29:03.741 "num_base_bdevs_discovered": 3, 00:29:03.741 "num_base_bdevs_operational": 3, 00:29:03.741 "process": { 00:29:03.741 "type": "rebuild", 00:29:03.741 "target": "spare", 00:29:03.741 "progress": { 00:29:03.741 "blocks": 20480, 00:29:03.741 "percent": 32 00:29:03.741 } 00:29:03.741 }, 00:29:03.741 "base_bdevs_list": [ 00:29:03.741 { 00:29:03.741 "name": "spare", 00:29:03.741 "uuid": "ed3ae191-2f02-542b-a048-195d8e252d23", 00:29:03.741 "is_configured": true, 00:29:03.741 "data_offset": 2048, 00:29:03.741 "data_size": 63488 00:29:03.741 }, 00:29:03.741 { 00:29:03.741 "name": null, 00:29:03.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.741 "is_configured": false, 00:29:03.741 "data_offset": 2048, 00:29:03.741 "data_size": 63488 00:29:03.741 }, 00:29:03.741 { 00:29:03.741 "name": "BaseBdev3", 00:29:03.741 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:03.741 "is_configured": true, 00:29:03.741 "data_offset": 2048, 00:29:03.741 "data_size": 63488 00:29:03.741 }, 00:29:03.741 { 00:29:03.741 "name": "BaseBdev4", 00:29:03.741 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:03.741 "is_configured": true, 00:29:03.741 "data_offset": 2048, 00:29:03.741 "data_size": 63488 00:29:03.741 } 00:29:03.741 ] 00:29:03.741 }' 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.741 07:26:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:03.741 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.741 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:03.741 07:26:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.741 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.001 [2024-11-20 07:26:28.030528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.001 [2024-11-20 07:26:28.069507] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:04.001 [2024-11-20 07:26:28.069585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.001 [2024-11-20 07:26:28.069657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.001 [2024-11-20 07:26:28.069669] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:04.001 07:26:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:04.001 "name": "raid_bdev1", 00:29:04.001 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:04.001 "strip_size_kb": 0, 00:29:04.001 "state": "online", 00:29:04.001 "raid_level": "raid1", 00:29:04.001 "superblock": true, 00:29:04.001 "num_base_bdevs": 4, 00:29:04.001 "num_base_bdevs_discovered": 2, 00:29:04.001 "num_base_bdevs_operational": 2, 00:29:04.001 "base_bdevs_list": [ 00:29:04.001 { 00:29:04.001 "name": null, 00:29:04.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.001 "is_configured": false, 00:29:04.001 "data_offset": 0, 00:29:04.001 "data_size": 63488 00:29:04.001 }, 00:29:04.001 { 00:29:04.001 "name": null, 00:29:04.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.001 "is_configured": false, 00:29:04.001 "data_offset": 2048, 00:29:04.001 "data_size": 63488 00:29:04.001 }, 00:29:04.001 { 00:29:04.001 "name": "BaseBdev3", 00:29:04.001 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:04.001 "is_configured": true, 00:29:04.001 "data_offset": 2048, 00:29:04.001 "data_size": 63488 00:29:04.001 }, 00:29:04.001 { 00:29:04.001 "name": "BaseBdev4", 00:29:04.001 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:04.001 "is_configured": true, 00:29:04.001 "data_offset": 2048, 00:29:04.001 
"data_size": 63488 00:29:04.001 } 00:29:04.001 ] 00:29:04.001 }' 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:04.001 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:04.567 "name": "raid_bdev1", 00:29:04.567 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:04.567 "strip_size_kb": 0, 00:29:04.567 "state": "online", 00:29:04.567 "raid_level": "raid1", 00:29:04.567 "superblock": true, 00:29:04.567 "num_base_bdevs": 4, 00:29:04.567 "num_base_bdevs_discovered": 2, 00:29:04.567 "num_base_bdevs_operational": 2, 00:29:04.567 "base_bdevs_list": [ 00:29:04.567 { 00:29:04.567 "name": null, 00:29:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:04.567 "is_configured": false, 00:29:04.567 "data_offset": 0, 00:29:04.567 "data_size": 63488 00:29:04.567 }, 00:29:04.567 { 00:29:04.567 "name": null, 00:29:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.567 "is_configured": false, 00:29:04.567 "data_offset": 2048, 00:29:04.567 "data_size": 63488 00:29:04.567 }, 00:29:04.567 { 00:29:04.567 "name": "BaseBdev3", 00:29:04.567 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:04.567 "is_configured": true, 00:29:04.567 "data_offset": 2048, 00:29:04.567 "data_size": 63488 00:29:04.567 }, 00:29:04.567 { 00:29:04.567 "name": "BaseBdev4", 00:29:04.567 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:04.567 "is_configured": true, 00:29:04.567 "data_offset": 2048, 00:29:04.567 "data_size": 63488 00:29:04.567 } 00:29:04.567 ] 00:29:04.567 }' 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.567 07:26:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.567 [2024-11-20 07:26:28.757344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:04.567 [2024-11-20 07:26:28.757416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.567 [2024-11-20 07:26:28.757445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:29:04.567 [2024-11-20 07:26:28.757458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.567 [2024-11-20 07:26:28.758125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.567 [2024-11-20 07:26:28.758155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:04.567 [2024-11-20 07:26:28.758287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:04.567 [2024-11-20 07:26:28.758323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:04.567 [2024-11-20 07:26:28.758336] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:04.567 [2024-11-20 07:26:28.758348] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:04.567 BaseBdev1 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.567 07:26:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.762 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.762 "name": "raid_bdev1", 00:29:05.762 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:05.762 "strip_size_kb": 0, 00:29:05.762 "state": "online", 00:29:05.762 "raid_level": "raid1", 00:29:05.762 "superblock": true, 00:29:05.762 "num_base_bdevs": 4, 00:29:05.762 "num_base_bdevs_discovered": 2, 00:29:05.762 "num_base_bdevs_operational": 2, 00:29:05.762 "base_bdevs_list": [ 00:29:05.762 { 00:29:05.762 "name": null, 00:29:05.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.762 "is_configured": false, 00:29:05.762 
"data_offset": 0, 00:29:05.762 "data_size": 63488 00:29:05.762 }, 00:29:05.762 { 00:29:05.762 "name": null, 00:29:05.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.762 "is_configured": false, 00:29:05.762 "data_offset": 2048, 00:29:05.762 "data_size": 63488 00:29:05.762 }, 00:29:05.762 { 00:29:05.762 "name": "BaseBdev3", 00:29:05.762 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:05.762 "is_configured": true, 00:29:05.762 "data_offset": 2048, 00:29:05.762 "data_size": 63488 00:29:05.762 }, 00:29:05.762 { 00:29:05.762 "name": "BaseBdev4", 00:29:05.762 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:05.762 "is_configured": true, 00:29:05.762 "data_offset": 2048, 00:29:05.762 "data_size": 63488 00:29:05.762 } 00:29:05.762 ] 00:29:05.762 }' 00:29:05.762 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.762 07:26:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:06.021 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:06.279 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.279 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:06.279 "name": "raid_bdev1", 00:29:06.279 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:06.279 "strip_size_kb": 0, 00:29:06.279 "state": "online", 00:29:06.279 "raid_level": "raid1", 00:29:06.279 "superblock": true, 00:29:06.280 "num_base_bdevs": 4, 00:29:06.280 "num_base_bdevs_discovered": 2, 00:29:06.280 "num_base_bdevs_operational": 2, 00:29:06.280 "base_bdevs_list": [ 00:29:06.280 { 00:29:06.280 "name": null, 00:29:06.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.280 "is_configured": false, 00:29:06.280 "data_offset": 0, 00:29:06.280 "data_size": 63488 00:29:06.280 }, 00:29:06.280 { 00:29:06.280 "name": null, 00:29:06.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.280 "is_configured": false, 00:29:06.280 "data_offset": 2048, 00:29:06.280 "data_size": 63488 00:29:06.280 }, 00:29:06.280 { 00:29:06.280 "name": "BaseBdev3", 00:29:06.280 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:06.280 "is_configured": true, 00:29:06.280 "data_offset": 2048, 00:29:06.280 "data_size": 63488 00:29:06.280 }, 00:29:06.280 { 00:29:06.280 "name": "BaseBdev4", 00:29:06.280 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:06.280 "is_configured": true, 00:29:06.280 "data_offset": 2048, 00:29:06.280 "data_size": 63488 00:29:06.280 } 00:29:06.280 ] 00:29:06.280 }' 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:06.280 [2024-11-20 07:26:30.470106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:06.280 [2024-11-20 07:26:30.470287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:06.280 [2024-11-20 07:26:30.470308] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:06.280 request: 00:29:06.280 { 00:29:06.280 "base_bdev": "BaseBdev1", 00:29:06.280 "raid_bdev": "raid_bdev1", 00:29:06.280 "method": "bdev_raid_add_base_bdev", 00:29:06.280 "req_id": 1 00:29:06.280 } 00:29:06.280 Got JSON-RPC error response 00:29:06.280 response: 00:29:06.280 { 00:29:06.280 "code": -22, 
00:29:06.280 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:06.280 } 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.280 07:26:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.215 07:26:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:07.215 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.473 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:07.473 "name": "raid_bdev1", 00:29:07.473 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:07.473 "strip_size_kb": 0, 00:29:07.473 "state": "online", 00:29:07.473 "raid_level": "raid1", 00:29:07.473 "superblock": true, 00:29:07.473 "num_base_bdevs": 4, 00:29:07.473 "num_base_bdevs_discovered": 2, 00:29:07.473 "num_base_bdevs_operational": 2, 00:29:07.473 "base_bdevs_list": [ 00:29:07.473 { 00:29:07.473 "name": null, 00:29:07.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.474 "is_configured": false, 00:29:07.474 "data_offset": 0, 00:29:07.474 "data_size": 63488 00:29:07.474 }, 00:29:07.474 { 00:29:07.474 "name": null, 00:29:07.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.474 "is_configured": false, 00:29:07.474 "data_offset": 2048, 00:29:07.474 "data_size": 63488 00:29:07.474 }, 00:29:07.474 { 00:29:07.474 "name": "BaseBdev3", 00:29:07.474 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:07.474 "is_configured": true, 00:29:07.474 "data_offset": 2048, 00:29:07.474 "data_size": 63488 00:29:07.474 }, 00:29:07.474 { 00:29:07.474 "name": "BaseBdev4", 00:29:07.474 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:07.474 "is_configured": true, 00:29:07.474 "data_offset": 2048, 00:29:07.474 "data_size": 63488 00:29:07.474 } 00:29:07.474 ] 00:29:07.474 }' 00:29:07.474 07:26:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:07.474 07:26:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:07.732 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.732 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:07.732 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:07.732 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:07.732 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.991 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:07.991 "name": "raid_bdev1", 00:29:07.991 "uuid": "9627d6af-c945-43ab-bb0f-6f70cf33c3e7", 00:29:07.991 "strip_size_kb": 0, 00:29:07.991 "state": "online", 00:29:07.992 "raid_level": "raid1", 00:29:07.992 "superblock": true, 00:29:07.992 "num_base_bdevs": 4, 00:29:07.992 "num_base_bdevs_discovered": 2, 00:29:07.992 "num_base_bdevs_operational": 2, 00:29:07.992 "base_bdevs_list": [ 00:29:07.992 { 00:29:07.992 "name": null, 00:29:07.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.992 "is_configured": false, 00:29:07.992 "data_offset": 0, 00:29:07.992 "data_size": 63488 00:29:07.992 }, 00:29:07.992 { 00:29:07.992 "name": null, 00:29:07.992 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:07.992 "is_configured": false, 00:29:07.992 "data_offset": 2048, 00:29:07.992 "data_size": 63488 00:29:07.992 }, 00:29:07.992 { 00:29:07.992 "name": "BaseBdev3", 00:29:07.992 "uuid": "2700d212-fddd-51f2-b225-4215839bd249", 00:29:07.992 "is_configured": true, 00:29:07.992 "data_offset": 2048, 00:29:07.992 "data_size": 63488 00:29:07.992 }, 00:29:07.992 { 00:29:07.992 "name": "BaseBdev4", 00:29:07.992 "uuid": "46f4fac9-cc14-5c81-a945-ee29168fcc41", 00:29:07.992 "is_configured": true, 00:29:07.992 "data_offset": 2048, 00:29:07.992 "data_size": 63488 00:29:07.992 } 00:29:07.992 ] 00:29:07.992 }' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79690 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79690 ']' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79690 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79690 00:29:07.992 killing process with pid 79690 00:29:07.992 Received shutdown signal, test time was about 19.520100 seconds 00:29:07.992 00:29:07.992 Latency(us) 00:29:07.992 [2024-11-20T07:26:32.281Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:29:07.992 [2024-11-20T07:26:32.281Z] =================================================================================================================== 00:29:07.992 [2024-11-20T07:26:32.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79690' 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79690 00:29:07.992 [2024-11-20 07:26:32.220171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:07.992 07:26:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79690 00:29:07.992 [2024-11-20 07:26:32.220324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:07.992 [2024-11-20 07:26:32.220406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:07.992 [2024-11-20 07:26:32.220424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:08.251 [2024-11-20 07:26:32.523863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:09.187 ************************************ 00:29:09.187 END TEST raid_rebuild_test_sb_io 00:29:09.187 ************************************ 00:29:09.187 07:26:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:09.187 00:29:09.187 real 0m22.952s 00:29:09.187 user 0m31.396s 00:29:09.187 sys 0m2.356s 00:29:09.187 07:26:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.187 07:26:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:09.446 07:26:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:29:09.446 07:26:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:29:09.446 07:26:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:09.446 07:26:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.446 07:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:09.446 ************************************ 00:29:09.446 START TEST raid5f_state_function_test 00:29:09.446 ************************************ 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:09.446 07:26:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80423 00:29:09.446 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:09.447 07:26:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80423' 00:29:09.447 Process raid pid: 80423 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80423 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80423 ']' 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.447 07:26:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.447 [2024-11-20 07:26:33.649069] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:29:09.447 [2024-11-20 07:26:33.649517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.706 [2024-11-20 07:26:33.832236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.706 [2024-11-20 07:26:33.948765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.965 [2024-11-20 07:26:34.135151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:09.965 [2024-11-20 07:26:34.135191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.534 [2024-11-20 07:26:34.620424] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:10.534 [2024-11-20 07:26:34.620496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:10.534 [2024-11-20 07:26:34.620513] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:10.534 [2024-11-20 07:26:34.620527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:10.534 [2024-11-20 07:26:34.620536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:29:10.534 [2024-11-20 07:26:34.620547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:10.534 "name": "Existed_Raid", 00:29:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.534 "strip_size_kb": 64, 00:29:10.534 "state": "configuring", 00:29:10.534 "raid_level": "raid5f", 00:29:10.534 "superblock": false, 00:29:10.534 "num_base_bdevs": 3, 00:29:10.534 "num_base_bdevs_discovered": 0, 00:29:10.534 "num_base_bdevs_operational": 3, 00:29:10.534 "base_bdevs_list": [ 00:29:10.534 { 00:29:10.534 "name": "BaseBdev1", 00:29:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.534 "is_configured": false, 00:29:10.534 "data_offset": 0, 00:29:10.534 "data_size": 0 00:29:10.534 }, 00:29:10.534 { 00:29:10.534 "name": "BaseBdev2", 00:29:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.534 "is_configured": false, 00:29:10.534 "data_offset": 0, 00:29:10.534 "data_size": 0 00:29:10.534 }, 00:29:10.534 { 00:29:10.534 "name": "BaseBdev3", 00:29:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.534 "is_configured": false, 00:29:10.534 "data_offset": 0, 00:29:10.534 "data_size": 0 00:29:10.534 } 00:29:10.534 ] 00:29:10.534 }' 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:10.534 07:26:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.102 [2024-11-20 07:26:35.128561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:11.102 [2024-11-20 07:26:35.128631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.102 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.102 [2024-11-20 07:26:35.140547] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:11.103 [2024-11-20 07:26:35.140806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:11.103 [2024-11-20 07:26:35.140933] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:11.103 [2024-11-20 07:26:35.141007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:11.103 [2024-11-20 07:26:35.141228] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:11.103 [2024-11-20 07:26:35.141283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.103 [2024-11-20 07:26:35.186077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:11.103 BaseBdev1 00:29:11.103 07:26:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.103 [ 00:29:11.103 { 00:29:11.103 "name": "BaseBdev1", 00:29:11.103 "aliases": [ 00:29:11.103 "740c6c3f-c88e-4440-a3cd-da28e835f462" 00:29:11.103 ], 00:29:11.103 "product_name": "Malloc disk", 00:29:11.103 "block_size": 512, 00:29:11.103 "num_blocks": 65536, 00:29:11.103 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:11.103 "assigned_rate_limits": { 00:29:11.103 "rw_ios_per_sec": 0, 00:29:11.103 
"rw_mbytes_per_sec": 0, 00:29:11.103 "r_mbytes_per_sec": 0, 00:29:11.103 "w_mbytes_per_sec": 0 00:29:11.103 }, 00:29:11.103 "claimed": true, 00:29:11.103 "claim_type": "exclusive_write", 00:29:11.103 "zoned": false, 00:29:11.103 "supported_io_types": { 00:29:11.103 "read": true, 00:29:11.103 "write": true, 00:29:11.103 "unmap": true, 00:29:11.103 "flush": true, 00:29:11.103 "reset": true, 00:29:11.103 "nvme_admin": false, 00:29:11.103 "nvme_io": false, 00:29:11.103 "nvme_io_md": false, 00:29:11.103 "write_zeroes": true, 00:29:11.103 "zcopy": true, 00:29:11.103 "get_zone_info": false, 00:29:11.103 "zone_management": false, 00:29:11.103 "zone_append": false, 00:29:11.103 "compare": false, 00:29:11.103 "compare_and_write": false, 00:29:11.103 "abort": true, 00:29:11.103 "seek_hole": false, 00:29:11.103 "seek_data": false, 00:29:11.103 "copy": true, 00:29:11.103 "nvme_iov_md": false 00:29:11.103 }, 00:29:11.103 "memory_domains": [ 00:29:11.103 { 00:29:11.103 "dma_device_id": "system", 00:29:11.103 "dma_device_type": 1 00:29:11.103 }, 00:29:11.103 { 00:29:11.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:11.103 "dma_device_type": 2 00:29:11.103 } 00:29:11.103 ], 00:29:11.103 "driver_specific": {} 00:29:11.103 } 00:29:11.103 ] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:11.103 07:26:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:11.103 "name": "Existed_Raid", 00:29:11.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.103 "strip_size_kb": 64, 00:29:11.103 "state": "configuring", 00:29:11.103 "raid_level": "raid5f", 00:29:11.103 "superblock": false, 00:29:11.103 "num_base_bdevs": 3, 00:29:11.103 "num_base_bdevs_discovered": 1, 00:29:11.103 "num_base_bdevs_operational": 3, 00:29:11.103 "base_bdevs_list": [ 00:29:11.103 { 00:29:11.103 "name": "BaseBdev1", 00:29:11.103 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:11.103 "is_configured": true, 00:29:11.103 "data_offset": 0, 00:29:11.103 "data_size": 65536 00:29:11.103 }, 00:29:11.103 { 00:29:11.103 "name": 
"BaseBdev2", 00:29:11.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.103 "is_configured": false, 00:29:11.103 "data_offset": 0, 00:29:11.103 "data_size": 0 00:29:11.103 }, 00:29:11.103 { 00:29:11.103 "name": "BaseBdev3", 00:29:11.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.103 "is_configured": false, 00:29:11.103 "data_offset": 0, 00:29:11.103 "data_size": 0 00:29:11.103 } 00:29:11.103 ] 00:29:11.103 }' 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:11.103 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.671 [2024-11-20 07:26:35.718294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:11.671 [2024-11-20 07:26:35.718348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:11.671 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.672 [2024-11-20 07:26:35.726352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:11.672 [2024-11-20 07:26:35.728812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:29:11.672 [2024-11-20 07:26:35.728876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:11.672 [2024-11-20 07:26:35.728892] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:11.672 [2024-11-20 07:26:35.728906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:11.672 "name": "Existed_Raid", 00:29:11.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.672 "strip_size_kb": 64, 00:29:11.672 "state": "configuring", 00:29:11.672 "raid_level": "raid5f", 00:29:11.672 "superblock": false, 00:29:11.672 "num_base_bdevs": 3, 00:29:11.672 "num_base_bdevs_discovered": 1, 00:29:11.672 "num_base_bdevs_operational": 3, 00:29:11.672 "base_bdevs_list": [ 00:29:11.672 { 00:29:11.672 "name": "BaseBdev1", 00:29:11.672 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:11.672 "is_configured": true, 00:29:11.672 "data_offset": 0, 00:29:11.672 "data_size": 65536 00:29:11.672 }, 00:29:11.672 { 00:29:11.672 "name": "BaseBdev2", 00:29:11.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.672 "is_configured": false, 00:29:11.672 "data_offset": 0, 00:29:11.672 "data_size": 0 00:29:11.672 }, 00:29:11.672 { 00:29:11.672 "name": "BaseBdev3", 00:29:11.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.672 "is_configured": false, 00:29:11.672 "data_offset": 0, 00:29:11.672 "data_size": 0 00:29:11.672 } 00:29:11.672 ] 00:29:11.672 }' 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:11.672 07:26:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.240 [2024-11-20 07:26:36.292852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:12.240 BaseBdev2 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:12.240 [ 00:29:12.240 { 00:29:12.240 "name": "BaseBdev2", 00:29:12.240 "aliases": [ 00:29:12.240 "432c1969-0343-4dcb-8878-821abc78ff15" 00:29:12.240 ], 00:29:12.240 "product_name": "Malloc disk", 00:29:12.240 "block_size": 512, 00:29:12.240 "num_blocks": 65536, 00:29:12.240 "uuid": "432c1969-0343-4dcb-8878-821abc78ff15", 00:29:12.240 "assigned_rate_limits": { 00:29:12.240 "rw_ios_per_sec": 0, 00:29:12.240 "rw_mbytes_per_sec": 0, 00:29:12.240 "r_mbytes_per_sec": 0, 00:29:12.240 "w_mbytes_per_sec": 0 00:29:12.240 }, 00:29:12.240 "claimed": true, 00:29:12.240 "claim_type": "exclusive_write", 00:29:12.240 "zoned": false, 00:29:12.240 "supported_io_types": { 00:29:12.240 "read": true, 00:29:12.240 "write": true, 00:29:12.240 "unmap": true, 00:29:12.240 "flush": true, 00:29:12.240 "reset": true, 00:29:12.240 "nvme_admin": false, 00:29:12.240 "nvme_io": false, 00:29:12.240 "nvme_io_md": false, 00:29:12.240 "write_zeroes": true, 00:29:12.240 "zcopy": true, 00:29:12.240 "get_zone_info": false, 00:29:12.240 "zone_management": false, 00:29:12.240 "zone_append": false, 00:29:12.240 "compare": false, 00:29:12.240 "compare_and_write": false, 00:29:12.240 "abort": true, 00:29:12.240 "seek_hole": false, 00:29:12.240 "seek_data": false, 00:29:12.240 "copy": true, 00:29:12.240 "nvme_iov_md": false 00:29:12.240 }, 00:29:12.240 "memory_domains": [ 00:29:12.240 { 00:29:12.240 "dma_device_id": "system", 00:29:12.240 "dma_device_type": 1 00:29:12.240 }, 00:29:12.240 { 00:29:12.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.240 "dma_device_type": 2 00:29:12.240 } 00:29:12.240 ], 00:29:12.240 "driver_specific": {} 00:29:12.240 } 00:29:12.240 ] 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:12.240 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:29:12.241 "name": "Existed_Raid", 00:29:12.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.241 "strip_size_kb": 64, 00:29:12.241 "state": "configuring", 00:29:12.241 "raid_level": "raid5f", 00:29:12.241 "superblock": false, 00:29:12.241 "num_base_bdevs": 3, 00:29:12.241 "num_base_bdevs_discovered": 2, 00:29:12.241 "num_base_bdevs_operational": 3, 00:29:12.241 "base_bdevs_list": [ 00:29:12.241 { 00:29:12.241 "name": "BaseBdev1", 00:29:12.241 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:12.241 "is_configured": true, 00:29:12.241 "data_offset": 0, 00:29:12.241 "data_size": 65536 00:29:12.241 }, 00:29:12.241 { 00:29:12.241 "name": "BaseBdev2", 00:29:12.241 "uuid": "432c1969-0343-4dcb-8878-821abc78ff15", 00:29:12.241 "is_configured": true, 00:29:12.241 "data_offset": 0, 00:29:12.241 "data_size": 65536 00:29:12.241 }, 00:29:12.241 { 00:29:12.241 "name": "BaseBdev3", 00:29:12.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.241 "is_configured": false, 00:29:12.241 "data_offset": 0, 00:29:12.241 "data_size": 0 00:29:12.241 } 00:29:12.241 ] 00:29:12.241 }' 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:12.241 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:12.809 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.809 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 [2024-11-20 07:26:36.893460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:12.809 [2024-11-20 07:26:36.893767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:12.809 [2024-11-20 07:26:36.893797] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:12.810 [2024-11-20 07:26:36.894141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:12.810 [2024-11-20 07:26:36.898704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:12.810 [2024-11-20 07:26:36.898727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:12.810 [2024-11-20 07:26:36.899069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.810 BaseBdev3 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.810 [ 00:29:12.810 { 00:29:12.810 "name": "BaseBdev3", 00:29:12.810 "aliases": [ 00:29:12.810 "8aa427d3-1116-4545-9047-9e8fe5e65661" 00:29:12.810 ], 00:29:12.810 "product_name": "Malloc disk", 00:29:12.810 "block_size": 512, 00:29:12.810 "num_blocks": 65536, 00:29:12.810 "uuid": "8aa427d3-1116-4545-9047-9e8fe5e65661", 00:29:12.810 "assigned_rate_limits": { 00:29:12.810 "rw_ios_per_sec": 0, 00:29:12.810 "rw_mbytes_per_sec": 0, 00:29:12.810 "r_mbytes_per_sec": 0, 00:29:12.810 "w_mbytes_per_sec": 0 00:29:12.810 }, 00:29:12.810 "claimed": true, 00:29:12.810 "claim_type": "exclusive_write", 00:29:12.810 "zoned": false, 00:29:12.810 "supported_io_types": { 00:29:12.810 "read": true, 00:29:12.810 "write": true, 00:29:12.810 "unmap": true, 00:29:12.810 "flush": true, 00:29:12.810 "reset": true, 00:29:12.810 "nvme_admin": false, 00:29:12.810 "nvme_io": false, 00:29:12.810 "nvme_io_md": false, 00:29:12.810 "write_zeroes": true, 00:29:12.810 "zcopy": true, 00:29:12.810 "get_zone_info": false, 00:29:12.810 "zone_management": false, 00:29:12.810 "zone_append": false, 00:29:12.810 "compare": false, 00:29:12.810 "compare_and_write": false, 00:29:12.810 "abort": true, 00:29:12.810 "seek_hole": false, 00:29:12.810 "seek_data": false, 00:29:12.810 "copy": true, 00:29:12.810 "nvme_iov_md": false 00:29:12.810 }, 00:29:12.810 "memory_domains": [ 00:29:12.810 { 00:29:12.810 "dma_device_id": "system", 00:29:12.810 "dma_device_type": 1 00:29:12.810 }, 00:29:12.810 { 00:29:12.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.810 "dma_device_type": 2 00:29:12.810 } 00:29:12.810 ], 00:29:12.810 "driver_specific": {} 00:29:12.810 } 00:29:12.810 ] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:12.810 07:26:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:12.810 "name": "Existed_Raid", 00:29:12.810 "uuid": "b3fb56ee-0373-4a5d-9331-c30a51a97b9d", 00:29:12.810 "strip_size_kb": 64, 00:29:12.810 "state": "online", 00:29:12.810 "raid_level": "raid5f", 00:29:12.810 "superblock": false, 00:29:12.810 "num_base_bdevs": 3, 00:29:12.810 "num_base_bdevs_discovered": 3, 00:29:12.810 "num_base_bdevs_operational": 3, 00:29:12.810 "base_bdevs_list": [ 00:29:12.810 { 00:29:12.810 "name": "BaseBdev1", 00:29:12.810 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:12.810 "is_configured": true, 00:29:12.810 "data_offset": 0, 00:29:12.810 "data_size": 65536 00:29:12.810 }, 00:29:12.810 { 00:29:12.810 "name": "BaseBdev2", 00:29:12.810 "uuid": "432c1969-0343-4dcb-8878-821abc78ff15", 00:29:12.810 "is_configured": true, 00:29:12.810 "data_offset": 0, 00:29:12.810 "data_size": 65536 00:29:12.810 }, 00:29:12.810 { 00:29:12.810 "name": "BaseBdev3", 00:29:12.810 "uuid": "8aa427d3-1116-4545-9047-9e8fe5e65661", 00:29:12.810 "is_configured": true, 00:29:12.810 "data_offset": 0, 00:29:12.810 "data_size": 65536 00:29:12.810 } 00:29:12.810 ] 00:29:12.810 }' 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:12.810 07:26:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:13.378 07:26:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.378 [2024-11-20 07:26:37.472809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.378 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:13.378 "name": "Existed_Raid", 00:29:13.378 "aliases": [ 00:29:13.378 "b3fb56ee-0373-4a5d-9331-c30a51a97b9d" 00:29:13.378 ], 00:29:13.378 "product_name": "Raid Volume", 00:29:13.378 "block_size": 512, 00:29:13.378 "num_blocks": 131072, 00:29:13.379 "uuid": "b3fb56ee-0373-4a5d-9331-c30a51a97b9d", 00:29:13.379 "assigned_rate_limits": { 00:29:13.379 "rw_ios_per_sec": 0, 00:29:13.379 "rw_mbytes_per_sec": 0, 00:29:13.379 "r_mbytes_per_sec": 0, 00:29:13.379 "w_mbytes_per_sec": 0 00:29:13.379 }, 00:29:13.379 "claimed": false, 00:29:13.379 "zoned": false, 00:29:13.379 "supported_io_types": { 00:29:13.379 "read": true, 00:29:13.379 "write": true, 00:29:13.379 "unmap": false, 00:29:13.379 "flush": false, 00:29:13.379 "reset": true, 00:29:13.379 "nvme_admin": false, 00:29:13.379 "nvme_io": false, 00:29:13.379 "nvme_io_md": false, 00:29:13.379 "write_zeroes": true, 00:29:13.379 "zcopy": false, 00:29:13.379 "get_zone_info": false, 00:29:13.379 "zone_management": false, 00:29:13.379 "zone_append": false, 
00:29:13.379 "compare": false, 00:29:13.379 "compare_and_write": false, 00:29:13.379 "abort": false, 00:29:13.379 "seek_hole": false, 00:29:13.379 "seek_data": false, 00:29:13.379 "copy": false, 00:29:13.379 "nvme_iov_md": false 00:29:13.379 }, 00:29:13.379 "driver_specific": { 00:29:13.379 "raid": { 00:29:13.379 "uuid": "b3fb56ee-0373-4a5d-9331-c30a51a97b9d", 00:29:13.379 "strip_size_kb": 64, 00:29:13.379 "state": "online", 00:29:13.379 "raid_level": "raid5f", 00:29:13.379 "superblock": false, 00:29:13.379 "num_base_bdevs": 3, 00:29:13.379 "num_base_bdevs_discovered": 3, 00:29:13.379 "num_base_bdevs_operational": 3, 00:29:13.379 "base_bdevs_list": [ 00:29:13.379 { 00:29:13.379 "name": "BaseBdev1", 00:29:13.379 "uuid": "740c6c3f-c88e-4440-a3cd-da28e835f462", 00:29:13.379 "is_configured": true, 00:29:13.379 "data_offset": 0, 00:29:13.379 "data_size": 65536 00:29:13.379 }, 00:29:13.379 { 00:29:13.379 "name": "BaseBdev2", 00:29:13.379 "uuid": "432c1969-0343-4dcb-8878-821abc78ff15", 00:29:13.379 "is_configured": true, 00:29:13.379 "data_offset": 0, 00:29:13.379 "data_size": 65536 00:29:13.379 }, 00:29:13.379 { 00:29:13.379 "name": "BaseBdev3", 00:29:13.379 "uuid": "8aa427d3-1116-4545-9047-9e8fe5e65661", 00:29:13.379 "is_configured": true, 00:29:13.379 "data_offset": 0, 00:29:13.379 "data_size": 65536 00:29:13.379 } 00:29:13.379 ] 00:29:13.379 } 00:29:13.379 } 00:29:13.379 }' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:13.379 BaseBdev2 00:29:13.379 BaseBdev3' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.379 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.638 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:13.638 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.639 [2024-11-20 07:26:37.792610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:13.639 
07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:13.639 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.898 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.898 "name": "Existed_Raid", 00:29:13.898 "uuid": "b3fb56ee-0373-4a5d-9331-c30a51a97b9d", 00:29:13.898 "strip_size_kb": 64, 00:29:13.898 "state": 
"online", 00:29:13.899 "raid_level": "raid5f", 00:29:13.899 "superblock": false, 00:29:13.899 "num_base_bdevs": 3, 00:29:13.899 "num_base_bdevs_discovered": 2, 00:29:13.899 "num_base_bdevs_operational": 2, 00:29:13.899 "base_bdevs_list": [ 00:29:13.899 { 00:29:13.899 "name": null, 00:29:13.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.899 "is_configured": false, 00:29:13.899 "data_offset": 0, 00:29:13.899 "data_size": 65536 00:29:13.899 }, 00:29:13.899 { 00:29:13.899 "name": "BaseBdev2", 00:29:13.899 "uuid": "432c1969-0343-4dcb-8878-821abc78ff15", 00:29:13.899 "is_configured": true, 00:29:13.899 "data_offset": 0, 00:29:13.899 "data_size": 65536 00:29:13.899 }, 00:29:13.899 { 00:29:13.899 "name": "BaseBdev3", 00:29:13.899 "uuid": "8aa427d3-1116-4545-9047-9e8fe5e65661", 00:29:13.899 "is_configured": true, 00:29:13.899 "data_offset": 0, 00:29:13.899 "data_size": 65536 00:29:13.899 } 00:29:13.899 ] 00:29:13.899 }' 00:29:13.899 07:26:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.899 07:26:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:14.157 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.416 [2024-11-20 07:26:38.453276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:14.416 [2024-11-20 07:26:38.453384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.416 [2024-11-20 07:26:38.524867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.416 [2024-11-20 07:26:38.584931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:14.416 [2024-11-20 07:26:38.585175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.416 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.679 BaseBdev2 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:29:14.679 [ 00:29:14.679 { 00:29:14.679 "name": "BaseBdev2", 00:29:14.679 "aliases": [ 00:29:14.679 "ee131518-3fc4-471a-a3a3-04bb44f9b3b5" 00:29:14.679 ], 00:29:14.679 "product_name": "Malloc disk", 00:29:14.679 "block_size": 512, 00:29:14.679 "num_blocks": 65536, 00:29:14.679 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5", 00:29:14.679 "assigned_rate_limits": { 00:29:14.679 "rw_ios_per_sec": 0, 00:29:14.679 "rw_mbytes_per_sec": 0, 00:29:14.679 "r_mbytes_per_sec": 0, 00:29:14.679 "w_mbytes_per_sec": 0 00:29:14.679 }, 00:29:14.679 "claimed": false, 00:29:14.679 "zoned": false, 00:29:14.679 "supported_io_types": { 00:29:14.679 "read": true, 00:29:14.679 "write": true, 00:29:14.679 "unmap": true, 00:29:14.679 "flush": true, 00:29:14.679 "reset": true, 00:29:14.679 "nvme_admin": false, 00:29:14.679 "nvme_io": false, 00:29:14.679 "nvme_io_md": false, 00:29:14.679 "write_zeroes": true, 00:29:14.679 "zcopy": true, 00:29:14.679 "get_zone_info": false, 00:29:14.679 "zone_management": false, 00:29:14.679 "zone_append": false, 00:29:14.679 "compare": false, 00:29:14.679 "compare_and_write": false, 00:29:14.679 "abort": true, 00:29:14.679 "seek_hole": false, 00:29:14.679 "seek_data": false, 00:29:14.679 "copy": true, 00:29:14.679 "nvme_iov_md": false 00:29:14.679 }, 00:29:14.679 "memory_domains": [ 00:29:14.679 { 00:29:14.679 "dma_device_id": "system", 00:29:14.679 "dma_device_type": 1 00:29:14.679 }, 00:29:14.679 { 00:29:14.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:14.679 "dma_device_type": 2 00:29:14.679 } 00:29:14.679 ], 00:29:14.679 "driver_specific": {} 00:29:14.679 } 00:29:14.679 ] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.679 BaseBdev3
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:29:14.679 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.680 [
00:29:14.680 {
00:29:14.680 "name": "BaseBdev3",
00:29:14.680 "aliases": [
00:29:14.680 "d4a6e2e1-47df-4d62-bfd0-fa53024114c8"
00:29:14.680 ],
00:29:14.680 "product_name": "Malloc disk",
00:29:14.680 "block_size": 512,
00:29:14.680 "num_blocks": 65536,
00:29:14.680 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:14.680 "assigned_rate_limits": {
00:29:14.680 "rw_ios_per_sec": 0,
00:29:14.680 "rw_mbytes_per_sec": 0,
00:29:14.680 "r_mbytes_per_sec": 0,
00:29:14.680 "w_mbytes_per_sec": 0
00:29:14.680 },
00:29:14.680 "claimed": false,
00:29:14.680 "zoned": false,
00:29:14.680 "supported_io_types": {
00:29:14.680 "read": true,
00:29:14.680 "write": true,
00:29:14.680 "unmap": true,
00:29:14.680 "flush": true,
00:29:14.680 "reset": true,
00:29:14.680 "nvme_admin": false,
00:29:14.680 "nvme_io": false,
00:29:14.680 "nvme_io_md": false,
00:29:14.680 "write_zeroes": true,
00:29:14.680 "zcopy": true,
00:29:14.680 "get_zone_info": false,
00:29:14.680 "zone_management": false,
00:29:14.680 "zone_append": false,
00:29:14.680 "compare": false,
00:29:14.680 "compare_and_write": false,
00:29:14.680 "abort": true,
00:29:14.680 "seek_hole": false,
00:29:14.680 "seek_data": false,
00:29:14.680 "copy": true,
00:29:14.680 "nvme_iov_md": false
00:29:14.680 },
00:29:14.680 "memory_domains": [
00:29:14.680 {
00:29:14.680 "dma_device_id": "system",
00:29:14.680 "dma_device_type": 1
00:29:14.680 },
00:29:14.680 {
00:29:14.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:29:14.680 "dma_device_type": 2
00:29:14.680 }
00:29:14.680 ],
00:29:14.680 "driver_specific": {}
00:29:14.680 }
00:29:14.680 ]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.680 [2024-11-20 07:26:38.862494] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:29:14.680 [2024-11-20 07:26:38.862724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:29:14.680 [2024-11-20 07:26:38.862853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:29:14.680 [2024-11-20 07:26:38.865202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:14.680 "name": "Existed_Raid",
00:29:14.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:14.680 "strip_size_kb": 64,
00:29:14.680 "state": "configuring",
00:29:14.680 "raid_level": "raid5f",
00:29:14.680 "superblock": false,
00:29:14.680 "num_base_bdevs": 3,
00:29:14.680 "num_base_bdevs_discovered": 2,
00:29:14.680 "num_base_bdevs_operational": 3,
00:29:14.680 "base_bdevs_list": [
00:29:14.680 {
00:29:14.680 "name": "BaseBdev1",
00:29:14.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:14.680 "is_configured": false,
00:29:14.680 "data_offset": 0,
00:29:14.680 "data_size": 0
00:29:14.680 },
00:29:14.680 {
00:29:14.680 "name": "BaseBdev2",
00:29:14.680 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:14.680 "is_configured": true,
00:29:14.680 "data_offset": 0,
00:29:14.680 "data_size": 65536
00:29:14.680 },
00:29:14.680 {
00:29:14.680 "name": "BaseBdev3",
00:29:14.680 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:14.680 "is_configured": true,
00:29:14.680 "data_offset": 0,
00:29:14.680 "data_size": 65536
00:29:14.680 }
00:29:14.680 ]
00:29:14.680 }'
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:14.680 07:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.247 [2024-11-20 07:26:39.390699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:15.247 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:15.248 "name": "Existed_Raid",
00:29:15.248 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:15.248 "strip_size_kb": 64,
00:29:15.248 "state": "configuring",
00:29:15.248 "raid_level": "raid5f",
00:29:15.248 "superblock": false,
00:29:15.248 "num_base_bdevs": 3,
00:29:15.248 "num_base_bdevs_discovered": 1,
00:29:15.248 "num_base_bdevs_operational": 3,
00:29:15.248 "base_bdevs_list": [
00:29:15.248 {
00:29:15.248 "name": "BaseBdev1",
00:29:15.248 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:15.248 "is_configured": false,
00:29:15.248 "data_offset": 0,
00:29:15.248 "data_size": 0
00:29:15.248 },
00:29:15.248 {
00:29:15.248 "name": null,
00:29:15.248 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:15.248 "is_configured": false,
00:29:15.248 "data_offset": 0,
00:29:15.248 "data_size": 65536
00:29:15.248 },
00:29:15.248 {
00:29:15.248 "name": "BaseBdev3",
00:29:15.248 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:15.248 "is_configured": true,
00:29:15.248 "data_offset": 0,
00:29:15.248 "data_size": 65536
00:29:15.248 }
00:29:15.248 ]
00:29:15.248 }'
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:15.248 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.816 07:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 [2024-11-20 07:26:40.015319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:29:15.816 BaseBdev1
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 [
00:29:15.816 {
00:29:15.816 "name": "BaseBdev1",
00:29:15.816 "aliases": [
00:29:15.816 "6a63090b-a8b8-4d35-90e1-bcee81b5c929"
00:29:15.816 ],
00:29:15.816 "product_name": "Malloc disk",
00:29:15.816 "block_size": 512,
00:29:15.816 "num_blocks": 65536,
00:29:15.816 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:15.816 "assigned_rate_limits": {
00:29:15.816 "rw_ios_per_sec": 0,
00:29:15.816 "rw_mbytes_per_sec": 0,
00:29:15.816 "r_mbytes_per_sec": 0,
00:29:15.816 "w_mbytes_per_sec": 0
00:29:15.816 },
00:29:15.816 "claimed": true,
00:29:15.816 "claim_type": "exclusive_write",
00:29:15.816 "zoned": false,
00:29:15.816 "supported_io_types": {
00:29:15.816 "read": true,
00:29:15.816 "write": true,
00:29:15.816 "unmap": true,
00:29:15.816 "flush": true,
00:29:15.816 "reset": true,
00:29:15.816 "nvme_admin": false,
00:29:15.816 "nvme_io": false,
00:29:15.816 "nvme_io_md": false,
00:29:15.816 "write_zeroes": true,
00:29:15.816 "zcopy": true,
00:29:15.816 "get_zone_info": false,
00:29:15.816 "zone_management": false,
00:29:15.816 "zone_append": false,
00:29:15.816 "compare": false,
00:29:15.816 "compare_and_write": false,
00:29:15.816 "abort": true,
00:29:15.816 "seek_hole": false,
00:29:15.816 "seek_data": false,
00:29:15.816 "copy": true,
00:29:15.816 "nvme_iov_md": false
00:29:15.816 },
00:29:15.816 "memory_domains": [
00:29:15.816 {
00:29:15.816 "dma_device_id": "system",
00:29:15.816 "dma_device_type": 1
00:29:15.816 },
00:29:15.816 {
00:29:15.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:29:15.816 "dma_device_type": 2
00:29:15.816 }
00:29:15.816 ],
00:29:15.816 "driver_specific": {}
00:29:15.816 }
00:29:15.816 ]
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:15.816 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.075 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:16.075 "name": "Existed_Raid",
00:29:16.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:16.075 "strip_size_kb": 64,
00:29:16.075 "state": "configuring",
00:29:16.075 "raid_level": "raid5f",
00:29:16.075 "superblock": false,
00:29:16.075 "num_base_bdevs": 3,
00:29:16.075 "num_base_bdevs_discovered": 2,
00:29:16.075 "num_base_bdevs_operational": 3,
00:29:16.075 "base_bdevs_list": [
00:29:16.075 {
00:29:16.075 "name": "BaseBdev1",
00:29:16.075 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:16.075 "is_configured": true,
00:29:16.075 "data_offset": 0,
00:29:16.075 "data_size": 65536
00:29:16.075 },
00:29:16.075 {
00:29:16.075 "name": null,
00:29:16.075 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:16.075 "is_configured": false,
00:29:16.075 "data_offset": 0,
00:29:16.075 "data_size": 65536
00:29:16.075 },
00:29:16.075 {
00:29:16.075 "name": "BaseBdev3",
00:29:16.075 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:16.075 "is_configured": true,
00:29:16.075 "data_offset": 0,
00:29:16.075 "data_size": 65536
00:29:16.075 }
00:29:16.075 ]
00:29:16.075 }'
00:29:16.075 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:16.075 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:16.334 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:16.334 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.334 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:29:16.334 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:16.334 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:16.593 [2024-11-20 07:26:40.659673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:16.593 "name": "Existed_Raid",
00:29:16.593 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:16.593 "strip_size_kb": 64,
00:29:16.593 "state": "configuring",
00:29:16.593 "raid_level": "raid5f",
00:29:16.593 "superblock": false,
00:29:16.593 "num_base_bdevs": 3,
00:29:16.593 "num_base_bdevs_discovered": 1,
00:29:16.593 "num_base_bdevs_operational": 3,
00:29:16.593 "base_bdevs_list": [
00:29:16.593 {
00:29:16.593 "name": "BaseBdev1",
00:29:16.593 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:16.593 "is_configured": true,
00:29:16.593 "data_offset": 0,
00:29:16.593 "data_size": 65536
00:29:16.593 },
00:29:16.593 {
00:29:16.593 "name": null,
00:29:16.593 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:16.593 "is_configured": false,
00:29:16.593 "data_offset": 0,
00:29:16.593 "data_size": 65536
00:29:16.593 },
00:29:16.593 {
00:29:16.593 "name": null,
00:29:16.593 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:16.593 "is_configured": false,
00:29:16.593 "data_offset": 0,
00:29:16.593 "data_size": 65536
00:29:16.593 }
00:29:16.593 ]
00:29:16.593 }'
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:16.593 07:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.161 [2024-11-20 07:26:41.243982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:17.161 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:17.162 "name": "Existed_Raid",
00:29:17.162 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:17.162 "strip_size_kb": 64,
00:29:17.162 "state": "configuring",
00:29:17.162 "raid_level": "raid5f",
00:29:17.162 "superblock": false,
00:29:17.162 "num_base_bdevs": 3,
00:29:17.162 "num_base_bdevs_discovered": 2,
00:29:17.162 "num_base_bdevs_operational": 3,
00:29:17.162 "base_bdevs_list": [
00:29:17.162 {
00:29:17.162 "name": "BaseBdev1",
00:29:17.162 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:17.162 "is_configured": true,
00:29:17.162 "data_offset": 0,
00:29:17.162 "data_size": 65536
00:29:17.162 },
00:29:17.162 {
00:29:17.162 "name": null,
00:29:17.162 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:17.162 "is_configured": false,
00:29:17.162 "data_offset": 0,
00:29:17.162 "data_size": 65536
00:29:17.162 },
00:29:17.162 {
00:29:17.162 "name": "BaseBdev3",
00:29:17.162 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:17.162 "is_configured": true,
00:29:17.162 "data_offset": 0,
00:29:17.162 "data_size": 65536
00:29:17.162 }
00:29:17.162 ]
00:29:17.162 }'
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:17.162 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.730 [2024-11-20 07:26:41.832117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:17.730 "name": "Existed_Raid",
00:29:17.730 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:17.730 "strip_size_kb": 64,
00:29:17.730 "state": "configuring",
00:29:17.730 "raid_level": "raid5f",
00:29:17.730 "superblock": false,
00:29:17.730 "num_base_bdevs": 3,
00:29:17.730 "num_base_bdevs_discovered": 1,
00:29:17.730 "num_base_bdevs_operational": 3,
00:29:17.730 "base_bdevs_list": [
00:29:17.730 {
00:29:17.730 "name": null,
00:29:17.730 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:17.730 "is_configured": false,
00:29:17.730 "data_offset": 0,
00:29:17.730 "data_size": 65536
00:29:17.730 },
00:29:17.730 {
00:29:17.730 "name": null,
00:29:17.730 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:17.730 "is_configured": false,
00:29:17.730 "data_offset": 0,
00:29:17.730 "data_size": 65536
00:29:17.730 },
00:29:17.730 {
00:29:17.730 "name": "BaseBdev3",
00:29:17.730 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:17.730 "is_configured": true,
00:29:17.730 "data_offset": 0,
00:29:17.730 "data_size": 65536
00:29:17.730 }
00:29:17.730 ]
00:29:17.730 }'
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:17.730 07:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.298 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:18.298 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.298 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.298 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.299 [2024-11-20 07:26:42.494852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:18.299 "name": "Existed_Raid",
00:29:18.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:29:18.299 "strip_size_kb": 64,
00:29:18.299 "state": "configuring",
00:29:18.299 "raid_level": "raid5f",
00:29:18.299 "superblock": false,
00:29:18.299 "num_base_bdevs": 3,
00:29:18.299 "num_base_bdevs_discovered": 2,
00:29:18.299 "num_base_bdevs_operational": 3,
00:29:18.299 "base_bdevs_list": [
00:29:18.299 {
00:29:18.299 "name": null,
00:29:18.299 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929",
00:29:18.299 "is_configured": false,
00:29:18.299 "data_offset": 0,
00:29:18.299 "data_size": 65536
00:29:18.299 },
00:29:18.299 {
00:29:18.299 "name": "BaseBdev2",
00:29:18.299 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5",
00:29:18.299 "is_configured": true,
00:29:18.299 "data_offset": 0,
00:29:18.299 "data_size": 65536
00:29:18.299 },
00:29:18.299 {
00:29:18.299 "name": "BaseBdev3",
00:29:18.299 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8",
00:29:18.299 "is_configured": true,
00:29:18.299 "data_offset": 0,
00:29:18.299 "data_size": 65536
00:29:18.299 }
00:29:18.299 ]
00:29:18.299 }'
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:18.299 07:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6a63090b-a8b8-4d35-90e1-bcee81b5c929
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.866 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:19.125 [2024-11-20 07:26:43.178427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:29:19.125 [2024-11-20 07:26:43.178472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:29:19.125 [2024-11-20 07:26:43.178485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:29:19.125 [2024-11-20 07:26:43.178838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:29:19.125 [2024-11-20 07:26:43.183216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:29:19.125 [2024-11-20 07:26:43.183239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:29:19.125 [2024-11-20 07:26:43.183557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:19.125 NewBaseBdev
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:29:19.125 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.125 07:26:43
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.125 [ 00:29:19.125 { 00:29:19.125 "name": "NewBaseBdev", 00:29:19.125 "aliases": [ 00:29:19.125 "6a63090b-a8b8-4d35-90e1-bcee81b5c929" 00:29:19.125 ], 00:29:19.125 "product_name": "Malloc disk", 00:29:19.125 "block_size": 512, 00:29:19.126 "num_blocks": 65536, 00:29:19.126 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929", 00:29:19.126 "assigned_rate_limits": { 00:29:19.126 "rw_ios_per_sec": 0, 00:29:19.126 "rw_mbytes_per_sec": 0, 00:29:19.126 "r_mbytes_per_sec": 0, 00:29:19.126 "w_mbytes_per_sec": 0 00:29:19.126 }, 00:29:19.126 "claimed": true, 00:29:19.126 "claim_type": "exclusive_write", 00:29:19.126 "zoned": false, 00:29:19.126 "supported_io_types": { 00:29:19.126 "read": true, 00:29:19.126 "write": true, 00:29:19.126 "unmap": true, 00:29:19.126 "flush": true, 00:29:19.126 "reset": true, 00:29:19.126 "nvme_admin": false, 00:29:19.126 "nvme_io": false, 00:29:19.126 "nvme_io_md": false, 00:29:19.126 "write_zeroes": true, 00:29:19.126 "zcopy": true, 00:29:19.126 "get_zone_info": false, 00:29:19.126 "zone_management": false, 00:29:19.126 "zone_append": false, 00:29:19.126 "compare": false, 00:29:19.126 "compare_and_write": false, 00:29:19.126 "abort": true, 00:29:19.126 "seek_hole": false, 00:29:19.126 "seek_data": false, 00:29:19.126 "copy": true, 00:29:19.126 "nvme_iov_md": false 00:29:19.126 }, 00:29:19.126 "memory_domains": [ 00:29:19.126 { 00:29:19.126 "dma_device_id": "system", 00:29:19.126 "dma_device_type": 1 00:29:19.126 }, 00:29:19.126 { 00:29:19.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.126 "dma_device_type": 2 00:29:19.126 } 00:29:19.126 ], 00:29:19.126 "driver_specific": {} 00:29:19.126 } 00:29:19.126 ] 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:19.126 07:26:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:19.126 "name": "Existed_Raid", 00:29:19.126 "uuid": "1066bdf9-753c-43d2-b848-084a8380c3ba", 00:29:19.126 "strip_size_kb": 64, 00:29:19.126 "state": "online", 
00:29:19.126 "raid_level": "raid5f", 00:29:19.126 "superblock": false, 00:29:19.126 "num_base_bdevs": 3, 00:29:19.126 "num_base_bdevs_discovered": 3, 00:29:19.126 "num_base_bdevs_operational": 3, 00:29:19.126 "base_bdevs_list": [ 00:29:19.126 { 00:29:19.126 "name": "NewBaseBdev", 00:29:19.126 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929", 00:29:19.126 "is_configured": true, 00:29:19.126 "data_offset": 0, 00:29:19.126 "data_size": 65536 00:29:19.126 }, 00:29:19.126 { 00:29:19.126 "name": "BaseBdev2", 00:29:19.126 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5", 00:29:19.126 "is_configured": true, 00:29:19.126 "data_offset": 0, 00:29:19.126 "data_size": 65536 00:29:19.126 }, 00:29:19.126 { 00:29:19.126 "name": "BaseBdev3", 00:29:19.126 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8", 00:29:19.126 "is_configured": true, 00:29:19.126 "data_offset": 0, 00:29:19.126 "data_size": 65536 00:29:19.126 } 00:29:19.126 ] 00:29:19.126 }' 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:19.126 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:19.695 07:26:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:19.695 [2024-11-20 07:26:43.745814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.695 "name": "Existed_Raid", 00:29:19.695 "aliases": [ 00:29:19.695 "1066bdf9-753c-43d2-b848-084a8380c3ba" 00:29:19.695 ], 00:29:19.695 "product_name": "Raid Volume", 00:29:19.695 "block_size": 512, 00:29:19.695 "num_blocks": 131072, 00:29:19.695 "uuid": "1066bdf9-753c-43d2-b848-084a8380c3ba", 00:29:19.695 "assigned_rate_limits": { 00:29:19.695 "rw_ios_per_sec": 0, 00:29:19.695 "rw_mbytes_per_sec": 0, 00:29:19.695 "r_mbytes_per_sec": 0, 00:29:19.695 "w_mbytes_per_sec": 0 00:29:19.695 }, 00:29:19.695 "claimed": false, 00:29:19.695 "zoned": false, 00:29:19.695 "supported_io_types": { 00:29:19.695 "read": true, 00:29:19.695 "write": true, 00:29:19.695 "unmap": false, 00:29:19.695 "flush": false, 00:29:19.695 "reset": true, 00:29:19.695 "nvme_admin": false, 00:29:19.695 "nvme_io": false, 00:29:19.695 "nvme_io_md": false, 00:29:19.695 "write_zeroes": true, 00:29:19.695 "zcopy": false, 00:29:19.695 "get_zone_info": false, 00:29:19.695 "zone_management": false, 00:29:19.695 "zone_append": false, 00:29:19.695 "compare": false, 00:29:19.695 "compare_and_write": false, 00:29:19.695 "abort": false, 00:29:19.695 "seek_hole": false, 00:29:19.695 "seek_data": false, 00:29:19.695 "copy": false, 00:29:19.695 "nvme_iov_md": false 00:29:19.695 }, 00:29:19.695 "driver_specific": { 00:29:19.695 "raid": { 00:29:19.695 "uuid": 
"1066bdf9-753c-43d2-b848-084a8380c3ba", 00:29:19.695 "strip_size_kb": 64, 00:29:19.695 "state": "online", 00:29:19.695 "raid_level": "raid5f", 00:29:19.695 "superblock": false, 00:29:19.695 "num_base_bdevs": 3, 00:29:19.695 "num_base_bdevs_discovered": 3, 00:29:19.695 "num_base_bdevs_operational": 3, 00:29:19.695 "base_bdevs_list": [ 00:29:19.695 { 00:29:19.695 "name": "NewBaseBdev", 00:29:19.695 "uuid": "6a63090b-a8b8-4d35-90e1-bcee81b5c929", 00:29:19.695 "is_configured": true, 00:29:19.695 "data_offset": 0, 00:29:19.695 "data_size": 65536 00:29:19.695 }, 00:29:19.695 { 00:29:19.695 "name": "BaseBdev2", 00:29:19.695 "uuid": "ee131518-3fc4-471a-a3a3-04bb44f9b3b5", 00:29:19.695 "is_configured": true, 00:29:19.695 "data_offset": 0, 00:29:19.695 "data_size": 65536 00:29:19.695 }, 00:29:19.695 { 00:29:19.695 "name": "BaseBdev3", 00:29:19.695 "uuid": "d4a6e2e1-47df-4d62-bfd0-fa53024114c8", 00:29:19.695 "is_configured": true, 00:29:19.695 "data_offset": 0, 00:29:19.695 "data_size": 65536 00:29:19.695 } 00:29:19.695 ] 00:29:19.695 } 00:29:19.695 } 00:29:19.695 }' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:19.695 BaseBdev2 00:29:19.695 BaseBdev3' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.695 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.696 07:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.955 07:26:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.955 [2024-11-20 07:26:44.069542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:19.955 [2024-11-20 07:26:44.069569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:19.955 [2024-11-20 07:26:44.069692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.955 [2024-11-20 07:26:44.070073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:19.955 [2024-11-20 07:26:44.070092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80423 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80423 ']' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80423 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80423 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80423' 00:29:19.955 killing process with pid 80423 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80423 00:29:19.955 [2024-11-20 07:26:44.111853] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:19.955 07:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80423 00:29:20.214 [2024-11-20 07:26:44.343010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:21.151 07:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:21.151 00:29:21.151 real 0m11.711s 00:29:21.152 user 0m19.686s 00:29:21.152 sys 0m1.618s 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.152 ************************************ 00:29:21.152 END TEST raid5f_state_function_test 00:29:21.152 ************************************ 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.152 07:26:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:29:21.152 07:26:45 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:21.152 07:26:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.152 07:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.152 ************************************ 00:29:21.152 START TEST raid5f_state_function_test_sb 00:29:21.152 ************************************ 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:21.152 07:26:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:21.152 Process raid pid: 81056 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81056 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81056' 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81056 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81056 ']' 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.152 07:26:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.152 [2024-11-20 07:26:45.415506] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:29:21.152 [2024-11-20 07:26:45.416035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.411 [2024-11-20 07:26:45.607623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.670 [2024-11-20 07:26:45.716290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.670 [2024-11-20 07:26:45.899002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:21.670 [2024-11-20 07:26:45.899270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.270 [2024-11-20 07:26:46.344081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:22.270 [2024-11-20 07:26:46.344151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:22.270 [2024-11-20 07:26:46.344167] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:22.270 [2024-11-20 07:26:46.344181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:22.270 [2024-11-20 07:26:46.344190] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:29:22.270 [2024-11-20 07:26:46.344203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.270 07:26:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.270 "name": "Existed_Raid", 00:29:22.270 "uuid": "72d0c701-479b-4641-b2ed-f147b2802156", 00:29:22.270 "strip_size_kb": 64, 00:29:22.270 "state": "configuring", 00:29:22.270 "raid_level": "raid5f", 00:29:22.270 "superblock": true, 00:29:22.270 "num_base_bdevs": 3, 00:29:22.270 "num_base_bdevs_discovered": 0, 00:29:22.270 "num_base_bdevs_operational": 3, 00:29:22.270 "base_bdevs_list": [ 00:29:22.270 { 00:29:22.270 "name": "BaseBdev1", 00:29:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.270 "is_configured": false, 00:29:22.270 "data_offset": 0, 00:29:22.270 "data_size": 0 00:29:22.270 }, 00:29:22.270 { 00:29:22.270 "name": "BaseBdev2", 00:29:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.270 "is_configured": false, 00:29:22.270 "data_offset": 0, 00:29:22.270 "data_size": 0 00:29:22.270 }, 00:29:22.270 { 00:29:22.270 "name": "BaseBdev3", 00:29:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.270 "is_configured": false, 00:29:22.270 "data_offset": 0, 00:29:22.270 "data_size": 0 00:29:22.270 } 00:29:22.270 ] 00:29:22.270 }' 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.270 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 [2024-11-20 07:26:46.868169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:22.838 
[2024-11-20 07:26:46.868363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 [2024-11-20 07:26:46.880177] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:22.838 [2024-11-20 07:26:46.880361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:22.838 [2024-11-20 07:26:46.880488] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:22.838 [2024-11-20 07:26:46.880546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:22.838 [2024-11-20 07:26:46.880813] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:22.838 [2024-11-20 07:26:46.880876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 [2024-11-20 07:26:46.921975] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.838 BaseBdev1 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.838 [ 00:29:22.838 { 00:29:22.838 "name": "BaseBdev1", 00:29:22.838 "aliases": [ 00:29:22.838 "3d10bad5-b077-43b4-b76c-9bb32f5a0227" 00:29:22.838 ], 00:29:22.838 "product_name": "Malloc disk", 00:29:22.838 "block_size": 512, 00:29:22.838 
"num_blocks": 65536, 00:29:22.838 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:22.838 "assigned_rate_limits": { 00:29:22.838 "rw_ios_per_sec": 0, 00:29:22.838 "rw_mbytes_per_sec": 0, 00:29:22.838 "r_mbytes_per_sec": 0, 00:29:22.838 "w_mbytes_per_sec": 0 00:29:22.838 }, 00:29:22.838 "claimed": true, 00:29:22.838 "claim_type": "exclusive_write", 00:29:22.838 "zoned": false, 00:29:22.838 "supported_io_types": { 00:29:22.838 "read": true, 00:29:22.838 "write": true, 00:29:22.838 "unmap": true, 00:29:22.838 "flush": true, 00:29:22.838 "reset": true, 00:29:22.838 "nvme_admin": false, 00:29:22.838 "nvme_io": false, 00:29:22.838 "nvme_io_md": false, 00:29:22.838 "write_zeroes": true, 00:29:22.838 "zcopy": true, 00:29:22.838 "get_zone_info": false, 00:29:22.838 "zone_management": false, 00:29:22.838 "zone_append": false, 00:29:22.838 "compare": false, 00:29:22.838 "compare_and_write": false, 00:29:22.838 "abort": true, 00:29:22.838 "seek_hole": false, 00:29:22.838 "seek_data": false, 00:29:22.838 "copy": true, 00:29:22.838 "nvme_iov_md": false 00:29:22.838 }, 00:29:22.838 "memory_domains": [ 00:29:22.838 { 00:29:22.838 "dma_device_id": "system", 00:29:22.838 "dma_device_type": 1 00:29:22.838 }, 00:29:22.838 { 00:29:22.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:22.838 "dma_device_type": 2 00:29:22.838 } 00:29:22.838 ], 00:29:22.838 "driver_specific": {} 00:29:22.838 } 00:29:22.838 ] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.838 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.839 07:26:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.839 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.839 "name": "Existed_Raid", 00:29:22.839 "uuid": "2f9cb17e-545c-4e3f-ba2a-aef5d1a2640d", 00:29:22.839 "strip_size_kb": 64, 00:29:22.839 "state": "configuring", 00:29:22.839 "raid_level": "raid5f", 00:29:22.839 "superblock": true, 00:29:22.839 "num_base_bdevs": 3, 00:29:22.839 "num_base_bdevs_discovered": 1, 00:29:22.839 "num_base_bdevs_operational": 3, 00:29:22.839 "base_bdevs_list": [ 00:29:22.839 { 00:29:22.839 
"name": "BaseBdev1", 00:29:22.839 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:22.839 "is_configured": true, 00:29:22.839 "data_offset": 2048, 00:29:22.839 "data_size": 63488 00:29:22.839 }, 00:29:22.839 { 00:29:22.839 "name": "BaseBdev2", 00:29:22.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.839 "is_configured": false, 00:29:22.839 "data_offset": 0, 00:29:22.839 "data_size": 0 00:29:22.839 }, 00:29:22.839 { 00:29:22.839 "name": "BaseBdev3", 00:29:22.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.839 "is_configured": false, 00:29:22.839 "data_offset": 0, 00:29:22.839 "data_size": 0 00:29:22.839 } 00:29:22.839 ] 00:29:22.839 }' 00:29:22.839 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.839 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.407 [2024-11-20 07:26:47.478231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:23.407 [2024-11-20 07:26:47.478287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:29:23.407 [2024-11-20 07:26:47.490272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:23.407 [2024-11-20 07:26:47.492554] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:23.407 [2024-11-20 07:26:47.492663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:23.407 [2024-11-20 07:26:47.492680] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:23.407 [2024-11-20 07:26:47.492696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.407 "name": "Existed_Raid", 00:29:23.407 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:23.407 "strip_size_kb": 64, 00:29:23.407 "state": "configuring", 00:29:23.407 "raid_level": "raid5f", 00:29:23.407 "superblock": true, 00:29:23.407 "num_base_bdevs": 3, 00:29:23.407 "num_base_bdevs_discovered": 1, 00:29:23.407 "num_base_bdevs_operational": 3, 00:29:23.407 "base_bdevs_list": [ 00:29:23.407 { 00:29:23.407 "name": "BaseBdev1", 00:29:23.407 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:23.407 "is_configured": true, 00:29:23.407 "data_offset": 2048, 00:29:23.407 "data_size": 63488 00:29:23.407 }, 00:29:23.407 { 00:29:23.407 "name": "BaseBdev2", 00:29:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.407 "is_configured": false, 00:29:23.407 "data_offset": 0, 00:29:23.407 "data_size": 0 00:29:23.407 }, 00:29:23.407 { 00:29:23.407 "name": "BaseBdev3", 00:29:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.407 "is_configured": false, 00:29:23.407 "data_offset": 0, 00:29:23.407 "data_size": 
0 00:29:23.407 } 00:29:23.407 ] 00:29:23.407 }' 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.407 07:26:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.975 [2024-11-20 07:26:48.051122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.975 BaseBdev2 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.975 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.975 [ 00:29:23.975 { 00:29:23.975 "name": "BaseBdev2", 00:29:23.975 "aliases": [ 00:29:23.975 "621db301-0aad-4269-8745-d8e03e7f32cd" 00:29:23.975 ], 00:29:23.975 "product_name": "Malloc disk", 00:29:23.975 "block_size": 512, 00:29:23.975 "num_blocks": 65536, 00:29:23.975 "uuid": "621db301-0aad-4269-8745-d8e03e7f32cd", 00:29:23.975 "assigned_rate_limits": { 00:29:23.975 "rw_ios_per_sec": 0, 00:29:23.975 "rw_mbytes_per_sec": 0, 00:29:23.975 "r_mbytes_per_sec": 0, 00:29:23.975 "w_mbytes_per_sec": 0 00:29:23.975 }, 00:29:23.975 "claimed": true, 00:29:23.975 "claim_type": "exclusive_write", 00:29:23.975 "zoned": false, 00:29:23.975 "supported_io_types": { 00:29:23.975 "read": true, 00:29:23.975 "write": true, 00:29:23.975 "unmap": true, 00:29:23.975 "flush": true, 00:29:23.975 "reset": true, 00:29:23.975 "nvme_admin": false, 00:29:23.975 "nvme_io": false, 00:29:23.975 "nvme_io_md": false, 00:29:23.975 "write_zeroes": true, 00:29:23.975 "zcopy": true, 00:29:23.975 "get_zone_info": false, 00:29:23.975 "zone_management": false, 00:29:23.975 "zone_append": false, 00:29:23.975 "compare": false, 00:29:23.975 "compare_and_write": false, 00:29:23.975 "abort": true, 00:29:23.975 "seek_hole": false, 00:29:23.975 "seek_data": false, 00:29:23.975 "copy": true, 00:29:23.975 "nvme_iov_md": false 00:29:23.975 }, 00:29:23.976 "memory_domains": [ 00:29:23.976 { 00:29:23.976 "dma_device_id": "system", 00:29:23.976 "dma_device_type": 1 00:29:23.976 }, 00:29:23.976 { 00:29:23.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.976 "dma_device_type": 2 00:29:23.976 } 
00:29:23.976 ], 00:29:23.976 "driver_specific": {} 00:29:23.976 } 00:29:23.976 ] 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.976 "name": "Existed_Raid", 00:29:23.976 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:23.976 "strip_size_kb": 64, 00:29:23.976 "state": "configuring", 00:29:23.976 "raid_level": "raid5f", 00:29:23.976 "superblock": true, 00:29:23.976 "num_base_bdevs": 3, 00:29:23.976 "num_base_bdevs_discovered": 2, 00:29:23.976 "num_base_bdevs_operational": 3, 00:29:23.976 "base_bdevs_list": [ 00:29:23.976 { 00:29:23.976 "name": "BaseBdev1", 00:29:23.976 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:23.976 "is_configured": true, 00:29:23.976 "data_offset": 2048, 00:29:23.976 "data_size": 63488 00:29:23.976 }, 00:29:23.976 { 00:29:23.976 "name": "BaseBdev2", 00:29:23.976 "uuid": "621db301-0aad-4269-8745-d8e03e7f32cd", 00:29:23.976 "is_configured": true, 00:29:23.976 "data_offset": 2048, 00:29:23.976 "data_size": 63488 00:29:23.976 }, 00:29:23.976 { 00:29:23.976 "name": "BaseBdev3", 00:29:23.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.976 "is_configured": false, 00:29:23.976 "data_offset": 0, 00:29:23.976 "data_size": 0 00:29:23.976 } 00:29:23.976 ] 00:29:23.976 }' 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.976 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.544 [2024-11-20 07:26:48.660533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:24.544 [2024-11-20 07:26:48.660880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:24.544 [2024-11-20 07:26:48.660910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:24.544 BaseBdev3 00:29:24.544 [2024-11-20 07:26:48.661316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.544 [2024-11-20 07:26:48.666226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:24.544 [2024-11-20 07:26:48.666249] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:24.544 [2024-11-20 07:26:48.666565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.544 [ 00:29:24.544 { 00:29:24.544 "name": "BaseBdev3", 00:29:24.544 "aliases": [ 00:29:24.544 "caccf2e5-30f5-4def-b6d0-ec7c267ec8e6" 00:29:24.544 ], 00:29:24.544 "product_name": "Malloc disk", 00:29:24.544 "block_size": 512, 00:29:24.544 "num_blocks": 65536, 00:29:24.544 "uuid": "caccf2e5-30f5-4def-b6d0-ec7c267ec8e6", 00:29:24.544 "assigned_rate_limits": { 00:29:24.544 "rw_ios_per_sec": 0, 00:29:24.544 "rw_mbytes_per_sec": 0, 00:29:24.544 "r_mbytes_per_sec": 0, 00:29:24.544 "w_mbytes_per_sec": 0 00:29:24.544 }, 00:29:24.544 "claimed": true, 00:29:24.544 "claim_type": "exclusive_write", 00:29:24.544 "zoned": false, 00:29:24.544 "supported_io_types": { 00:29:24.544 "read": true, 00:29:24.544 "write": true, 00:29:24.544 "unmap": true, 00:29:24.544 "flush": true, 00:29:24.544 "reset": true, 00:29:24.544 "nvme_admin": false, 00:29:24.544 "nvme_io": false, 00:29:24.544 "nvme_io_md": false, 00:29:24.544 "write_zeroes": true, 00:29:24.544 "zcopy": true, 00:29:24.544 "get_zone_info": false, 00:29:24.544 "zone_management": false, 00:29:24.544 "zone_append": false, 00:29:24.544 "compare": false, 00:29:24.544 "compare_and_write": false, 00:29:24.544 "abort": true, 00:29:24.544 "seek_hole": false, 00:29:24.544 "seek_data": false, 00:29:24.544 "copy": true, 00:29:24.544 
"nvme_iov_md": false 00:29:24.544 }, 00:29:24.544 "memory_domains": [ 00:29:24.544 { 00:29:24.544 "dma_device_id": "system", 00:29:24.544 "dma_device_type": 1 00:29:24.544 }, 00:29:24.544 { 00:29:24.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.544 "dma_device_type": 2 00:29:24.544 } 00:29:24.544 ], 00:29:24.544 "driver_specific": {} 00:29:24.544 } 00:29:24.544 ] 00:29:24.544 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:24.545 "name": "Existed_Raid", 00:29:24.545 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:24.545 "strip_size_kb": 64, 00:29:24.545 "state": "online", 00:29:24.545 "raid_level": "raid5f", 00:29:24.545 "superblock": true, 00:29:24.545 "num_base_bdevs": 3, 00:29:24.545 "num_base_bdevs_discovered": 3, 00:29:24.545 "num_base_bdevs_operational": 3, 00:29:24.545 "base_bdevs_list": [ 00:29:24.545 { 00:29:24.545 "name": "BaseBdev1", 00:29:24.545 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:24.545 "is_configured": true, 00:29:24.545 "data_offset": 2048, 00:29:24.545 "data_size": 63488 00:29:24.545 }, 00:29:24.545 { 00:29:24.545 "name": "BaseBdev2", 00:29:24.545 "uuid": "621db301-0aad-4269-8745-d8e03e7f32cd", 00:29:24.545 "is_configured": true, 00:29:24.545 "data_offset": 2048, 00:29:24.545 "data_size": 63488 00:29:24.545 }, 00:29:24.545 { 00:29:24.545 "name": "BaseBdev3", 00:29:24.545 "uuid": "caccf2e5-30f5-4def-b6d0-ec7c267ec8e6", 00:29:24.545 "is_configured": true, 00:29:24.545 "data_offset": 2048, 00:29:24.545 "data_size": 63488 00:29:24.545 } 00:29:24.545 ] 00:29:24.545 }' 00:29:24.545 07:26:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:24.545 07:26:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:25.114 [2024-11-20 07:26:49.224253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.114 "name": "Existed_Raid", 00:29:25.114 "aliases": [ 00:29:25.114 "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb" 00:29:25.114 ], 00:29:25.114 "product_name": "Raid Volume", 00:29:25.114 "block_size": 512, 00:29:25.114 "num_blocks": 126976, 00:29:25.114 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:25.114 "assigned_rate_limits": { 00:29:25.114 "rw_ios_per_sec": 0, 00:29:25.114 
"rw_mbytes_per_sec": 0, 00:29:25.114 "r_mbytes_per_sec": 0, 00:29:25.114 "w_mbytes_per_sec": 0 00:29:25.114 }, 00:29:25.114 "claimed": false, 00:29:25.114 "zoned": false, 00:29:25.114 "supported_io_types": { 00:29:25.114 "read": true, 00:29:25.114 "write": true, 00:29:25.114 "unmap": false, 00:29:25.114 "flush": false, 00:29:25.114 "reset": true, 00:29:25.114 "nvme_admin": false, 00:29:25.114 "nvme_io": false, 00:29:25.114 "nvme_io_md": false, 00:29:25.114 "write_zeroes": true, 00:29:25.114 "zcopy": false, 00:29:25.114 "get_zone_info": false, 00:29:25.114 "zone_management": false, 00:29:25.114 "zone_append": false, 00:29:25.114 "compare": false, 00:29:25.114 "compare_and_write": false, 00:29:25.114 "abort": false, 00:29:25.114 "seek_hole": false, 00:29:25.114 "seek_data": false, 00:29:25.114 "copy": false, 00:29:25.114 "nvme_iov_md": false 00:29:25.114 }, 00:29:25.114 "driver_specific": { 00:29:25.114 "raid": { 00:29:25.114 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:25.114 "strip_size_kb": 64, 00:29:25.114 "state": "online", 00:29:25.114 "raid_level": "raid5f", 00:29:25.114 "superblock": true, 00:29:25.114 "num_base_bdevs": 3, 00:29:25.114 "num_base_bdevs_discovered": 3, 00:29:25.114 "num_base_bdevs_operational": 3, 00:29:25.114 "base_bdevs_list": [ 00:29:25.114 { 00:29:25.114 "name": "BaseBdev1", 00:29:25.114 "uuid": "3d10bad5-b077-43b4-b76c-9bb32f5a0227", 00:29:25.114 "is_configured": true, 00:29:25.114 "data_offset": 2048, 00:29:25.114 "data_size": 63488 00:29:25.114 }, 00:29:25.114 { 00:29:25.114 "name": "BaseBdev2", 00:29:25.114 "uuid": "621db301-0aad-4269-8745-d8e03e7f32cd", 00:29:25.114 "is_configured": true, 00:29:25.114 "data_offset": 2048, 00:29:25.114 "data_size": 63488 00:29:25.114 }, 00:29:25.114 { 00:29:25.114 "name": "BaseBdev3", 00:29:25.114 "uuid": "caccf2e5-30f5-4def-b6d0-ec7c267ec8e6", 00:29:25.114 "is_configured": true, 00:29:25.114 "data_offset": 2048, 00:29:25.114 "data_size": 63488 00:29:25.114 } 00:29:25.114 ] 00:29:25.114 } 
00:29:25.114 } 00:29:25.114 }' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:25.114 BaseBdev2 00:29:25.114 BaseBdev3' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.114 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.373 [2024-11-20 
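An aside on the comparison above: the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` renders absent fields as empty strings, which is why both `cmp_raid_bdev` and `cmp_base_bdev` come out as `512` followed by trailing blanks, and why the bash test is written as `[[ 512 == \5\1\2\ \ \ ]]`. A minimal Python sketch of that jq behaviour (the helper name is made up; the sample dict assumes the metadata fields are unset, consistent with the log):

```python
def join_fields(bdev, keys, sep=" "):
    """Approximate jq's join(): numbers are stringified, null/missing
    entries become empty strings, separators are kept between all slots."""
    return sep.join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Shaped like one entry of the bdev_get_bdevs output in the log above.
base_bdev = {"block_size": 512, "md_size": None,
             "md_interleave": None, "dif_type": None}

fingerprint = join_fields(base_bdev,
                          ["block_size", "md_size", "md_interleave", "dif_type"])
print(repr(fingerprint))  # -> '512   ' (three trailing spaces)
```

Comparing these joined strings is how the test asserts that every base bdev shares the raid bdev's block size and metadata layout.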
07:26:49.540165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:25.373 07:26:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.373 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.631 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:25.631 "name": "Existed_Raid", 00:29:25.631 "uuid": "a4bc8c0a-91b6-4bc7-915a-1fa0a6b2eadb", 00:29:25.631 "strip_size_kb": 64, 00:29:25.631 "state": "online", 00:29:25.631 "raid_level": "raid5f", 00:29:25.631 "superblock": true, 00:29:25.631 "num_base_bdevs": 3, 00:29:25.631 "num_base_bdevs_discovered": 2, 00:29:25.631 "num_base_bdevs_operational": 2, 00:29:25.631 "base_bdevs_list": [ 00:29:25.631 { 00:29:25.631 "name": null, 00:29:25.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.631 "is_configured": false, 00:29:25.631 "data_offset": 0, 00:29:25.631 "data_size": 63488 00:29:25.631 }, 00:29:25.631 { 00:29:25.631 "name": "BaseBdev2", 00:29:25.632 "uuid": "621db301-0aad-4269-8745-d8e03e7f32cd", 00:29:25.632 "is_configured": true, 00:29:25.632 "data_offset": 2048, 00:29:25.632 "data_size": 63488 00:29:25.632 }, 00:29:25.632 { 00:29:25.632 "name": "BaseBdev3", 00:29:25.632 "uuid": "caccf2e5-30f5-4def-b6d0-ec7c267ec8e6", 00:29:25.632 "is_configured": true, 00:29:25.632 "data_offset": 2048, 00:29:25.632 "data_size": 63488 00:29:25.632 } 00:29:25.632 ] 00:29:25.632 }' 00:29:25.632 07:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:25.632 07:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
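The `verify_raid_bdev_state` helper traced above reduces to selecting one entry of `bdev_raid_get_bdevs all` by name (the jq `select(.name == "Existed_Raid")`) and checking a handful of fields against expectations. A rough standalone equivalent in Python (the function shape is an illustration, not the script's actual interface; sample values are lifted from the Existed_Raid JSON in the log):

```python
def verify_raid_bdev_state(bdevs, name, expected_state,
                           raid_level, strip_size_kb, operational):
    """Mirror the jq select + field comparisons from bdev_raid.sh:
    pick the named raid bdev and compare state, level, strip size,
    and the number of operational base bdevs."""
    info = next((b for b in bdevs if b["name"] == name), None)
    if info is None:
        return False
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == operational)

# Snapshot after BaseBdev1 was deleted: still online with 2 of 3 bdevs.
sample = [{"name": "Existed_Raid", "state": "online", "raid_level": "raid5f",
           "strip_size_kb": 64, "num_base_bdevs_operational": 2}]

print(verify_raid_bdev_state(sample, "Existed_Raid", "online", "raid5f", 64, 2))
# -> True: raid5f has redundancy, so losing one of three bdevs keeps it online
```

This is the same check the log performs with `has_redundancy raid5f` followed by `expected_state=online`.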
00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.890 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.152 [2024-11-20 07:26:50.183655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:26.152 [2024-11-20 07:26:50.183841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:26.152 [2024-11-20 07:26:50.253539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:26.152 07:26:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.152 [2024-11-20 07:26:50.317589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:26.152 [2024-11-20 07:26:50.317670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.152 
07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.152 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 BaseBdev2 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:26.415 07:26:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 [ 00:29:26.415 { 00:29:26.415 "name": "BaseBdev2", 00:29:26.415 "aliases": [ 00:29:26.415 "4d19ef7b-4851-4f9c-afdd-fa2374499a79" 00:29:26.415 ], 00:29:26.415 "product_name": "Malloc disk", 00:29:26.415 "block_size": 512, 00:29:26.415 "num_blocks": 65536, 00:29:26.415 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:26.415 "assigned_rate_limits": { 00:29:26.415 "rw_ios_per_sec": 0, 00:29:26.415 "rw_mbytes_per_sec": 0, 00:29:26.415 "r_mbytes_per_sec": 0, 00:29:26.415 "w_mbytes_per_sec": 0 00:29:26.415 }, 00:29:26.415 "claimed": false, 00:29:26.415 "zoned": false, 00:29:26.415 "supported_io_types": { 00:29:26.415 "read": true, 00:29:26.415 "write": true, 00:29:26.415 "unmap": true, 00:29:26.415 "flush": true, 00:29:26.415 "reset": true, 00:29:26.415 "nvme_admin": false, 00:29:26.415 "nvme_io": false, 00:29:26.415 "nvme_io_md": false, 00:29:26.415 "write_zeroes": true, 00:29:26.415 "zcopy": true, 00:29:26.415 "get_zone_info": false, 
00:29:26.415 "zone_management": false, 00:29:26.415 "zone_append": false, 00:29:26.415 "compare": false, 00:29:26.415 "compare_and_write": false, 00:29:26.415 "abort": true, 00:29:26.415 "seek_hole": false, 00:29:26.415 "seek_data": false, 00:29:26.415 "copy": true, 00:29:26.415 "nvme_iov_md": false 00:29:26.415 }, 00:29:26.415 "memory_domains": [ 00:29:26.415 { 00:29:26.415 "dma_device_id": "system", 00:29:26.415 "dma_device_type": 1 00:29:26.415 }, 00:29:26.415 { 00:29:26.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:26.415 "dma_device_type": 2 00:29:26.415 } 00:29:26.415 ], 00:29:26.415 "driver_specific": {} 00:29:26.415 } 00:29:26.415 ] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 BaseBdev3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:26.415 07:26:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 [ 00:29:26.415 { 00:29:26.415 "name": "BaseBdev3", 00:29:26.415 "aliases": [ 00:29:26.415 "6e0fc51d-3153-45f9-88ee-d268c98552be" 00:29:26.415 ], 00:29:26.415 "product_name": "Malloc disk", 00:29:26.415 "block_size": 512, 00:29:26.415 "num_blocks": 65536, 00:29:26.415 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:26.415 "assigned_rate_limits": { 00:29:26.415 "rw_ios_per_sec": 0, 00:29:26.415 "rw_mbytes_per_sec": 0, 00:29:26.415 "r_mbytes_per_sec": 0, 00:29:26.415 "w_mbytes_per_sec": 0 00:29:26.415 }, 00:29:26.415 "claimed": false, 00:29:26.415 "zoned": false, 00:29:26.415 "supported_io_types": { 00:29:26.415 "read": true, 00:29:26.415 "write": true, 00:29:26.415 "unmap": true, 00:29:26.415 "flush": true, 00:29:26.415 "reset": true, 00:29:26.415 "nvme_admin": false, 00:29:26.415 "nvme_io": false, 00:29:26.415 "nvme_io_md": 
false, 00:29:26.415 "write_zeroes": true, 00:29:26.415 "zcopy": true, 00:29:26.415 "get_zone_info": false, 00:29:26.415 "zone_management": false, 00:29:26.415 "zone_append": false, 00:29:26.415 "compare": false, 00:29:26.415 "compare_and_write": false, 00:29:26.415 "abort": true, 00:29:26.415 "seek_hole": false, 00:29:26.415 "seek_data": false, 00:29:26.415 "copy": true, 00:29:26.415 "nvme_iov_md": false 00:29:26.415 }, 00:29:26.415 "memory_domains": [ 00:29:26.415 { 00:29:26.415 "dma_device_id": "system", 00:29:26.415 "dma_device_type": 1 00:29:26.415 }, 00:29:26.415 { 00:29:26.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:26.415 "dma_device_type": 2 00:29:26.415 } 00:29:26.415 ], 00:29:26.415 "driver_specific": {} 00:29:26.415 } 00:29:26.415 ] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 [2024-11-20 07:26:50.598120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:26.415 [2024-11-20 07:26:50.598311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:26.415 [2024-11-20 07:26:50.598437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:29:26.415 [2024-11-20 07:26:50.600883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.415 07:26:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:26.415 "name": "Existed_Raid", 00:29:26.415 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:26.415 "strip_size_kb": 64, 00:29:26.415 "state": "configuring", 00:29:26.415 "raid_level": "raid5f", 00:29:26.415 "superblock": true, 00:29:26.415 "num_base_bdevs": 3, 00:29:26.415 "num_base_bdevs_discovered": 2, 00:29:26.415 "num_base_bdevs_operational": 3, 00:29:26.415 "base_bdevs_list": [ 00:29:26.415 { 00:29:26.415 "name": "BaseBdev1", 00:29:26.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.415 "is_configured": false, 00:29:26.415 "data_offset": 0, 00:29:26.415 "data_size": 0 00:29:26.415 }, 00:29:26.415 { 00:29:26.415 "name": "BaseBdev2", 00:29:26.415 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:26.415 "is_configured": true, 00:29:26.415 "data_offset": 2048, 00:29:26.415 "data_size": 63488 00:29:26.415 }, 00:29:26.415 { 00:29:26.415 "name": "BaseBdev3", 00:29:26.415 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:26.415 "is_configured": true, 00:29:26.415 "data_offset": 2048, 00:29:26.415 "data_size": 63488 00:29:26.415 } 00:29:26.415 ] 00:29:26.415 }' 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:26.415 07:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.983 [2024-11-20 07:26:51.138356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:26.983 
07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:26.983 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.984 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:29:26.984 "name": "Existed_Raid", 00:29:26.984 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:26.984 "strip_size_kb": 64, 00:29:26.984 "state": "configuring", 00:29:26.984 "raid_level": "raid5f", 00:29:26.984 "superblock": true, 00:29:26.984 "num_base_bdevs": 3, 00:29:26.984 "num_base_bdevs_discovered": 1, 00:29:26.984 "num_base_bdevs_operational": 3, 00:29:26.984 "base_bdevs_list": [ 00:29:26.984 { 00:29:26.984 "name": "BaseBdev1", 00:29:26.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.984 "is_configured": false, 00:29:26.984 "data_offset": 0, 00:29:26.984 "data_size": 0 00:29:26.984 }, 00:29:26.984 { 00:29:26.984 "name": null, 00:29:26.984 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:26.984 "is_configured": false, 00:29:26.984 "data_offset": 0, 00:29:26.984 "data_size": 63488 00:29:26.984 }, 00:29:26.984 { 00:29:26.984 "name": "BaseBdev3", 00:29:26.984 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:26.984 "is_configured": true, 00:29:26.984 "data_offset": 2048, 00:29:26.984 "data_size": 63488 00:29:26.984 } 00:29:26.984 ] 00:29:26.984 }' 00:29:26.984 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:26.984 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- 
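The `num_base_bdevs_discovered` field the test keeps re-reading is simply the number of `base_bdevs_list` entries with `is_configured: true`; after `bdev_raid_remove_base_bdev BaseBdev2` the count above drops from 2 to 1 while `num_base_bdevs_operational` stays 3. A short sketch of that bookkeeping (helper name invented; the sample mirrors the post-removal snapshot in the log):

```python
def count_configured(raid_info):
    """Count base bdevs with is_configured set - what the RPC output
    reports as num_base_bdevs_discovered."""
    return sum(1 for b in raid_info["base_bdevs_list"] if b["is_configured"])

# Mirrors Existed_Raid after BaseBdev2 was removed: only BaseBdev3 remains
# configured; BaseBdev1 has not been created yet, slot 2 is now empty.
raid_info = {"base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": False},
    {"name": None,        "is_configured": False},
    {"name": "BaseBdev3", "is_configured": True},
]}

print(count_configured(raid_info))  # -> 1
```

Checks like `jq '.[0].base_bdevs_list[1].is_configured'` in the following log lines probe individual slots of this same list.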
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 [2024-11-20 07:26:51.748074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:27.551 BaseBdev1 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:27.551 
07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 [ 00:29:27.551 { 00:29:27.551 "name": "BaseBdev1", 00:29:27.551 "aliases": [ 00:29:27.551 "4aafa1bb-7e43-4b53-9167-a70ea457f159" 00:29:27.551 ], 00:29:27.551 "product_name": "Malloc disk", 00:29:27.551 "block_size": 512, 00:29:27.551 "num_blocks": 65536, 00:29:27.551 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:27.551 "assigned_rate_limits": { 00:29:27.551 "rw_ios_per_sec": 0, 00:29:27.551 "rw_mbytes_per_sec": 0, 00:29:27.551 "r_mbytes_per_sec": 0, 00:29:27.551 "w_mbytes_per_sec": 0 00:29:27.551 }, 00:29:27.551 "claimed": true, 00:29:27.551 "claim_type": "exclusive_write", 00:29:27.551 "zoned": false, 00:29:27.551 "supported_io_types": { 00:29:27.551 "read": true, 00:29:27.551 "write": true, 00:29:27.551 "unmap": true, 00:29:27.551 "flush": true, 00:29:27.551 "reset": true, 00:29:27.551 "nvme_admin": false, 00:29:27.551 "nvme_io": false, 00:29:27.551 "nvme_io_md": false, 00:29:27.551 "write_zeroes": true, 00:29:27.551 "zcopy": true, 00:29:27.551 "get_zone_info": false, 00:29:27.551 "zone_management": false, 00:29:27.551 "zone_append": false, 00:29:27.551 "compare": false, 00:29:27.551 "compare_and_write": false, 00:29:27.551 "abort": true, 00:29:27.551 "seek_hole": false, 00:29:27.551 "seek_data": false, 00:29:27.551 "copy": true, 00:29:27.551 "nvme_iov_md": false 00:29:27.551 }, 00:29:27.551 "memory_domains": [ 00:29:27.551 { 00:29:27.551 "dma_device_id": "system", 00:29:27.551 "dma_device_type": 1 00:29:27.551 }, 00:29:27.551 { 00:29:27.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:27.551 "dma_device_type": 2 00:29:27.551 } 00:29:27.551 ], 00:29:27.551 "driver_specific": {} 00:29:27.551 } 00:29:27.551 ] 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.551 
07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.551 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.809 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:29:27.809 "name": "Existed_Raid", 00:29:27.809 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:27.809 "strip_size_kb": 64, 00:29:27.809 "state": "configuring", 00:29:27.809 "raid_level": "raid5f", 00:29:27.809 "superblock": true, 00:29:27.809 "num_base_bdevs": 3, 00:29:27.809 "num_base_bdevs_discovered": 2, 00:29:27.809 "num_base_bdevs_operational": 3, 00:29:27.809 "base_bdevs_list": [ 00:29:27.809 { 00:29:27.809 "name": "BaseBdev1", 00:29:27.809 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:27.809 "is_configured": true, 00:29:27.809 "data_offset": 2048, 00:29:27.809 "data_size": 63488 00:29:27.809 }, 00:29:27.810 { 00:29:27.810 "name": null, 00:29:27.810 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:27.810 "is_configured": false, 00:29:27.810 "data_offset": 0, 00:29:27.810 "data_size": 63488 00:29:27.810 }, 00:29:27.810 { 00:29:27.810 "name": "BaseBdev3", 00:29:27.810 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:27.810 "is_configured": true, 00:29:27.810 "data_offset": 2048, 00:29:27.810 "data_size": 63488 00:29:27.810 } 00:29:27.810 ] 00:29:27.810 }' 00:29:27.810 07:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.810 07:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.068 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:28.068 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.068 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.068 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.068 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.327 [2024-11-20 07:26:52.368322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.327 07:26:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.327 "name": "Existed_Raid", 00:29:28.327 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:28.327 "strip_size_kb": 64, 00:29:28.327 "state": "configuring", 00:29:28.327 "raid_level": "raid5f", 00:29:28.327 "superblock": true, 00:29:28.327 "num_base_bdevs": 3, 00:29:28.327 "num_base_bdevs_discovered": 1, 00:29:28.327 "num_base_bdevs_operational": 3, 00:29:28.327 "base_bdevs_list": [ 00:29:28.327 { 00:29:28.327 "name": "BaseBdev1", 00:29:28.327 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:28.327 "is_configured": true, 00:29:28.327 "data_offset": 2048, 00:29:28.327 "data_size": 63488 00:29:28.327 }, 00:29:28.327 { 00:29:28.327 "name": null, 00:29:28.327 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:28.327 "is_configured": false, 00:29:28.327 "data_offset": 0, 00:29:28.327 "data_size": 63488 00:29:28.327 }, 00:29:28.327 { 00:29:28.327 "name": null, 00:29:28.327 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:28.327 "is_configured": false, 00:29:28.327 "data_offset": 0, 00:29:28.327 "data_size": 63488 00:29:28.327 } 00:29:28.327 ] 00:29:28.327 }' 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.327 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.895 [2024-11-20 07:26:52.980473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:28.895 07:26:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.895 07:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.895 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.895 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.895 "name": "Existed_Raid", 00:29:28.895 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:28.895 "strip_size_kb": 64, 00:29:28.895 "state": "configuring", 00:29:28.896 "raid_level": "raid5f", 00:29:28.896 "superblock": true, 00:29:28.896 "num_base_bdevs": 3, 00:29:28.896 "num_base_bdevs_discovered": 2, 00:29:28.896 "num_base_bdevs_operational": 3, 00:29:28.896 "base_bdevs_list": [ 00:29:28.896 { 00:29:28.896 "name": "BaseBdev1", 00:29:28.896 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:28.896 "is_configured": true, 00:29:28.896 "data_offset": 2048, 00:29:28.896 "data_size": 63488 00:29:28.896 }, 00:29:28.896 { 00:29:28.896 "name": null, 00:29:28.896 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:28.896 "is_configured": false, 00:29:28.896 "data_offset": 0, 00:29:28.896 "data_size": 63488 00:29:28.896 }, 00:29:28.896 { 
00:29:28.896 "name": "BaseBdev3", 00:29:28.896 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:28.896 "is_configured": true, 00:29:28.896 "data_offset": 2048, 00:29:28.896 "data_size": 63488 00:29:28.896 } 00:29:28.896 ] 00:29:28.896 }' 00:29:28.896 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.896 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.470 [2024-11-20 07:26:53.560681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.470 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.470 "name": "Existed_Raid", 00:29:29.470 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:29.470 "strip_size_kb": 64, 00:29:29.470 "state": "configuring", 00:29:29.470 "raid_level": "raid5f", 00:29:29.470 "superblock": true, 00:29:29.470 "num_base_bdevs": 3, 00:29:29.470 "num_base_bdevs_discovered": 1, 00:29:29.470 
"num_base_bdevs_operational": 3, 00:29:29.470 "base_bdevs_list": [ 00:29:29.470 { 00:29:29.470 "name": null, 00:29:29.470 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:29.470 "is_configured": false, 00:29:29.470 "data_offset": 0, 00:29:29.470 "data_size": 63488 00:29:29.470 }, 00:29:29.470 { 00:29:29.470 "name": null, 00:29:29.470 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:29.470 "is_configured": false, 00:29:29.470 "data_offset": 0, 00:29:29.470 "data_size": 63488 00:29:29.470 }, 00:29:29.470 { 00:29:29.470 "name": "BaseBdev3", 00:29:29.470 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:29.470 "is_configured": true, 00:29:29.470 "data_offset": 2048, 00:29:29.470 "data_size": 63488 00:29:29.470 } 00:29:29.470 ] 00:29:29.470 }' 00:29:29.471 07:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.471 07:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.050 07:26:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.050 [2024-11-20 07:26:54.198821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:30.050 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:30.051 "name": "Existed_Raid", 00:29:30.051 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:30.051 "strip_size_kb": 64, 00:29:30.051 "state": "configuring", 00:29:30.051 "raid_level": "raid5f", 00:29:30.051 "superblock": true, 00:29:30.051 "num_base_bdevs": 3, 00:29:30.051 "num_base_bdevs_discovered": 2, 00:29:30.051 "num_base_bdevs_operational": 3, 00:29:30.051 "base_bdevs_list": [ 00:29:30.051 { 00:29:30.051 "name": null, 00:29:30.051 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:30.051 "is_configured": false, 00:29:30.051 "data_offset": 0, 00:29:30.051 "data_size": 63488 00:29:30.051 }, 00:29:30.051 { 00:29:30.051 "name": "BaseBdev2", 00:29:30.051 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:30.051 "is_configured": true, 00:29:30.051 "data_offset": 2048, 00:29:30.051 "data_size": 63488 00:29:30.051 }, 00:29:30.051 { 00:29:30.051 "name": "BaseBdev3", 00:29:30.051 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:30.051 "is_configured": true, 00:29:30.051 "data_offset": 2048, 00:29:30.051 "data_size": 63488 00:29:30.051 } 00:29:30.051 ] 00:29:30.051 }' 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:30.051 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.618 07:26:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aafa1bb-7e43-4b53-9167-a70ea457f159 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.618 [2024-11-20 07:26:54.870273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:30.618 [2024-11-20 07:26:54.870737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:30.618 [2024-11-20 07:26:54.870766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:30.618 NewBaseBdev 00:29:30.618 [2024-11-20 07:26:54.871142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.618 07:26:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.618 [2024-11-20 07:26:54.875806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:30.618 [2024-11-20 07:26:54.875829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:30.618 [2024-11-20 07:26:54.876032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:30.618 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.619 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.619 [ 00:29:30.619 { 00:29:30.619 "name": "NewBaseBdev", 00:29:30.619 
"aliases": [ 00:29:30.619 "4aafa1bb-7e43-4b53-9167-a70ea457f159" 00:29:30.619 ], 00:29:30.619 "product_name": "Malloc disk", 00:29:30.619 "block_size": 512, 00:29:30.619 "num_blocks": 65536, 00:29:30.619 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:30.619 "assigned_rate_limits": { 00:29:30.619 "rw_ios_per_sec": 0, 00:29:30.619 "rw_mbytes_per_sec": 0, 00:29:30.619 "r_mbytes_per_sec": 0, 00:29:30.619 "w_mbytes_per_sec": 0 00:29:30.619 }, 00:29:30.619 "claimed": true, 00:29:30.619 "claim_type": "exclusive_write", 00:29:30.619 "zoned": false, 00:29:30.619 "supported_io_types": { 00:29:30.619 "read": true, 00:29:30.619 "write": true, 00:29:30.619 "unmap": true, 00:29:30.619 "flush": true, 00:29:30.619 "reset": true, 00:29:30.619 "nvme_admin": false, 00:29:30.877 "nvme_io": false, 00:29:30.878 "nvme_io_md": false, 00:29:30.878 "write_zeroes": true, 00:29:30.878 "zcopy": true, 00:29:30.878 "get_zone_info": false, 00:29:30.878 "zone_management": false, 00:29:30.878 "zone_append": false, 00:29:30.878 "compare": false, 00:29:30.878 "compare_and_write": false, 00:29:30.878 "abort": true, 00:29:30.878 "seek_hole": false, 00:29:30.878 "seek_data": false, 00:29:30.878 "copy": true, 00:29:30.878 "nvme_iov_md": false 00:29:30.878 }, 00:29:30.878 "memory_domains": [ 00:29:30.878 { 00:29:30.878 "dma_device_id": "system", 00:29:30.878 "dma_device_type": 1 00:29:30.878 }, 00:29:30.878 { 00:29:30.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.878 "dma_device_type": 2 00:29:30.878 } 00:29:30.878 ], 00:29:30.878 "driver_specific": {} 00:29:30.878 } 00:29:30.878 ] 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:30.878 07:26:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:30.878 "name": "Existed_Raid", 00:29:30.878 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:30.878 "strip_size_kb": 64, 00:29:30.878 "state": "online", 00:29:30.878 "raid_level": "raid5f", 00:29:30.878 "superblock": true, 00:29:30.878 
"num_base_bdevs": 3, 00:29:30.878 "num_base_bdevs_discovered": 3, 00:29:30.878 "num_base_bdevs_operational": 3, 00:29:30.878 "base_bdevs_list": [ 00:29:30.878 { 00:29:30.878 "name": "NewBaseBdev", 00:29:30.878 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:30.878 "is_configured": true, 00:29:30.878 "data_offset": 2048, 00:29:30.878 "data_size": 63488 00:29:30.878 }, 00:29:30.878 { 00:29:30.878 "name": "BaseBdev2", 00:29:30.878 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:30.878 "is_configured": true, 00:29:30.878 "data_offset": 2048, 00:29:30.878 "data_size": 63488 00:29:30.878 }, 00:29:30.878 { 00:29:30.878 "name": "BaseBdev3", 00:29:30.878 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:30.878 "is_configured": true, 00:29:30.878 "data_offset": 2048, 00:29:30.878 "data_size": 63488 00:29:30.878 } 00:29:30.878 ] 00:29:30.878 }' 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:30.878 07:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:31.445 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:31.446 [2024-11-20 07:26:55.446124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.446 "name": "Existed_Raid", 00:29:31.446 "aliases": [ 00:29:31.446 "cb7ddc3d-30a1-4da7-9e55-b891e27520e4" 00:29:31.446 ], 00:29:31.446 "product_name": "Raid Volume", 00:29:31.446 "block_size": 512, 00:29:31.446 "num_blocks": 126976, 00:29:31.446 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:31.446 "assigned_rate_limits": { 00:29:31.446 "rw_ios_per_sec": 0, 00:29:31.446 "rw_mbytes_per_sec": 0, 00:29:31.446 "r_mbytes_per_sec": 0, 00:29:31.446 "w_mbytes_per_sec": 0 00:29:31.446 }, 00:29:31.446 "claimed": false, 00:29:31.446 "zoned": false, 00:29:31.446 "supported_io_types": { 00:29:31.446 "read": true, 00:29:31.446 "write": true, 00:29:31.446 "unmap": false, 00:29:31.446 "flush": false, 00:29:31.446 "reset": true, 00:29:31.446 "nvme_admin": false, 00:29:31.446 "nvme_io": false, 00:29:31.446 "nvme_io_md": false, 00:29:31.446 "write_zeroes": true, 00:29:31.446 "zcopy": false, 00:29:31.446 "get_zone_info": false, 00:29:31.446 "zone_management": false, 00:29:31.446 "zone_append": false, 00:29:31.446 "compare": false, 00:29:31.446 "compare_and_write": false, 00:29:31.446 "abort": false, 00:29:31.446 "seek_hole": false, 00:29:31.446 "seek_data": false, 00:29:31.446 "copy": false, 00:29:31.446 "nvme_iov_md": false 00:29:31.446 }, 00:29:31.446 "driver_specific": { 00:29:31.446 "raid": { 00:29:31.446 "uuid": "cb7ddc3d-30a1-4da7-9e55-b891e27520e4", 00:29:31.446 
"strip_size_kb": 64, 00:29:31.446 "state": "online", 00:29:31.446 "raid_level": "raid5f", 00:29:31.446 "superblock": true, 00:29:31.446 "num_base_bdevs": 3, 00:29:31.446 "num_base_bdevs_discovered": 3, 00:29:31.446 "num_base_bdevs_operational": 3, 00:29:31.446 "base_bdevs_list": [ 00:29:31.446 { 00:29:31.446 "name": "NewBaseBdev", 00:29:31.446 "uuid": "4aafa1bb-7e43-4b53-9167-a70ea457f159", 00:29:31.446 "is_configured": true, 00:29:31.446 "data_offset": 2048, 00:29:31.446 "data_size": 63488 00:29:31.446 }, 00:29:31.446 { 00:29:31.446 "name": "BaseBdev2", 00:29:31.446 "uuid": "4d19ef7b-4851-4f9c-afdd-fa2374499a79", 00:29:31.446 "is_configured": true, 00:29:31.446 "data_offset": 2048, 00:29:31.446 "data_size": 63488 00:29:31.446 }, 00:29:31.446 { 00:29:31.446 "name": "BaseBdev3", 00:29:31.446 "uuid": "6e0fc51d-3153-45f9-88ee-d268c98552be", 00:29:31.446 "is_configured": true, 00:29:31.446 "data_offset": 2048, 00:29:31.446 "data_size": 63488 00:29:31.446 } 00:29:31.446 ] 00:29:31.446 } 00:29:31.446 } 00:29:31.446 }' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:31.446 BaseBdev2 00:29:31.446 BaseBdev3' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.446 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.705 [2024-11-20 07:26:55.765905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:31.705 [2024-11-20 07:26:55.765949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:31.705 [2024-11-20 07:26:55.766056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:31.705 [2024-11-20 07:26:55.766354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:31.705 [2024-11-20 07:26:55.766374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81056 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81056 ']' 00:29:31.705 07:26:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81056 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81056 00:29:31.705 killing process with pid 81056 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81056' 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81056 00:29:31.705 [2024-11-20 07:26:55.803629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:31.705 07:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81056 00:29:31.963 [2024-11-20 07:26:56.021874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:32.900 07:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:32.900 00:29:32.900 real 0m11.685s 00:29:32.900 user 0m19.570s 00:29:32.900 sys 0m1.632s 00:29:32.900 07:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.900 ************************************ 00:29:32.900 END TEST raid5f_state_function_test_sb 00:29:32.900 ************************************ 00:29:32.900 07:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.900 07:26:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:29:32.900 07:26:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.900 07:26:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.900 07:26:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:32.900 ************************************ 00:29:32.900 START TEST raid5f_superblock_test 00:29:32.900 ************************************ 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81687 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81687 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:32.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81687 ']' 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.900 07:26:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.900 [2024-11-20 07:26:57.164908] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:29:32.900 [2024-11-20 07:26:57.165211] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81687 ] 00:29:33.160 [2024-11-20 07:26:57.351517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.418 [2024-11-20 07:26:57.475661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.418 [2024-11-20 07:26:57.663855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.418 [2024-11-20 07:26:57.663931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.987 malloc1 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.987 [2024-11-20 07:26:58.182608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:33.987 [2024-11-20 07:26:58.182888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.987 [2024-11-20 07:26:58.183011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:33.987 [2024-11-20 07:26:58.183310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.987 [2024-11-20 07:26:58.186126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.987 [2024-11-20 07:26:58.186343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:33.987 pt1 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.987 malloc2 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.987 [2024-11-20 07:26:58.236292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:33.987 [2024-11-20 07:26:58.236516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.987 [2024-11-20 07:26:58.236616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:33.987 [2024-11-20 07:26:58.236732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.987 [2024-11-20 07:26:58.239585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.987 [2024-11-20 07:26:58.239801] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:33.987 pt2 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:29:33.987 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.988 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.251 malloc3 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.251 [2024-11-20 07:26:58.297409] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:34.251 [2024-11-20 07:26:58.297683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.251 [2024-11-20 07:26:58.297730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:34.251 [2024-11-20 07:26:58.297746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.251 [2024-11-20 07:26:58.300551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.251 [2024-11-20 07:26:58.300620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:34.251 pt3 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.251 [2024-11-20 07:26:58.309512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:34.251 [2024-11-20 07:26:58.311957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:34.251 [2024-11-20 07:26:58.312040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:34.251 [2024-11-20 07:26:58.312248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:34.251 [2024-11-20 07:26:58.312274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:29:34.251 [2024-11-20 07:26:58.312570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:34.251 [2024-11-20 07:26:58.317282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:34.251 [2024-11-20 07:26:58.317304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:34.251 [2024-11-20 07:26:58.317567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.251 "name": "raid_bdev1", 00:29:34.251 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:34.251 "strip_size_kb": 64, 00:29:34.251 "state": "online", 00:29:34.251 "raid_level": "raid5f", 00:29:34.251 "superblock": true, 00:29:34.251 "num_base_bdevs": 3, 00:29:34.251 "num_base_bdevs_discovered": 3, 00:29:34.251 "num_base_bdevs_operational": 3, 00:29:34.251 "base_bdevs_list": [ 00:29:34.251 { 00:29:34.251 "name": "pt1", 00:29:34.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.251 "is_configured": true, 00:29:34.251 "data_offset": 2048, 00:29:34.251 "data_size": 63488 00:29:34.251 }, 00:29:34.251 { 00:29:34.251 "name": "pt2", 00:29:34.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:34.251 "is_configured": true, 00:29:34.251 "data_offset": 2048, 00:29:34.251 "data_size": 63488 00:29:34.251 }, 00:29:34.251 { 00:29:34.251 "name": "pt3", 00:29:34.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:34.251 "is_configured": true, 00:29:34.251 "data_offset": 2048, 00:29:34.251 "data_size": 63488 00:29:34.251 } 00:29:34.251 ] 00:29:34.251 }' 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.251 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:34.821 07:26:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.821 [2024-11-20 07:26:58.843388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.821 "name": "raid_bdev1", 00:29:34.821 "aliases": [ 00:29:34.821 "de784780-efc1-4e7c-b1ba-d48c7487d3e8" 00:29:34.821 ], 00:29:34.821 "product_name": "Raid Volume", 00:29:34.821 "block_size": 512, 00:29:34.821 "num_blocks": 126976, 00:29:34.821 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:34.821 "assigned_rate_limits": { 00:29:34.821 "rw_ios_per_sec": 0, 00:29:34.821 "rw_mbytes_per_sec": 0, 00:29:34.821 "r_mbytes_per_sec": 0, 00:29:34.821 "w_mbytes_per_sec": 0 00:29:34.821 }, 00:29:34.821 "claimed": false, 00:29:34.821 "zoned": false, 00:29:34.821 "supported_io_types": { 00:29:34.821 "read": true, 00:29:34.821 "write": true, 00:29:34.821 "unmap": false, 00:29:34.821 "flush": false, 00:29:34.821 "reset": true, 00:29:34.821 "nvme_admin": false, 00:29:34.821 "nvme_io": false, 00:29:34.821 "nvme_io_md": false, 
00:29:34.821 "write_zeroes": true, 00:29:34.821 "zcopy": false, 00:29:34.821 "get_zone_info": false, 00:29:34.821 "zone_management": false, 00:29:34.821 "zone_append": false, 00:29:34.821 "compare": false, 00:29:34.821 "compare_and_write": false, 00:29:34.821 "abort": false, 00:29:34.821 "seek_hole": false, 00:29:34.821 "seek_data": false, 00:29:34.821 "copy": false, 00:29:34.821 "nvme_iov_md": false 00:29:34.821 }, 00:29:34.821 "driver_specific": { 00:29:34.821 "raid": { 00:29:34.821 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:34.821 "strip_size_kb": 64, 00:29:34.821 "state": "online", 00:29:34.821 "raid_level": "raid5f", 00:29:34.821 "superblock": true, 00:29:34.821 "num_base_bdevs": 3, 00:29:34.821 "num_base_bdevs_discovered": 3, 00:29:34.821 "num_base_bdevs_operational": 3, 00:29:34.821 "base_bdevs_list": [ 00:29:34.821 { 00:29:34.821 "name": "pt1", 00:29:34.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.821 "is_configured": true, 00:29:34.821 "data_offset": 2048, 00:29:34.821 "data_size": 63488 00:29:34.821 }, 00:29:34.821 { 00:29:34.821 "name": "pt2", 00:29:34.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:34.821 "is_configured": true, 00:29:34.821 "data_offset": 2048, 00:29:34.821 "data_size": 63488 00:29:34.821 }, 00:29:34.821 { 00:29:34.821 "name": "pt3", 00:29:34.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:34.821 "is_configured": true, 00:29:34.821 "data_offset": 2048, 00:29:34.821 "data_size": 63488 00:29:34.821 } 00:29:34.821 ] 00:29:34.821 } 00:29:34.821 } 00:29:34.821 }' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:34.821 pt2 00:29:34.821 pt3' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:34.821 07:26:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.821 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:35.081 
07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 [2024-11-20 07:26:59.179439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de784780-efc1-4e7c-b1ba-d48c7487d3e8 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de784780-efc1-4e7c-b1ba-d48c7487d3e8 ']' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:35.081 07:26:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 [2024-11-20 07:26:59.231210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.081 [2024-11-20 07:26:59.231456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:35.081 [2024-11-20 07:26:59.231585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:35.081 [2024-11-20 07:26:59.231760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:35.081 [2024-11-20 07:26:59.231780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.081 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.341 [2024-11-20 07:26:59.383379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:35.341 [2024-11-20 07:26:59.385924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:35.341 [2024-11-20 07:26:59.386155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:35.341 [2024-11-20 07:26:59.386274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:35.341 [2024-11-20 07:26:59.386483] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:35.341 [2024-11-20 07:26:59.386767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:35.341 [2024-11-20 07:26:59.387032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.341 [2024-11-20 07:26:59.387241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:35.341 request: 00:29:35.341 { 00:29:35.341 "name": "raid_bdev1", 00:29:35.341 "raid_level": "raid5f", 00:29:35.341 "base_bdevs": [ 00:29:35.341 "malloc1", 00:29:35.341 "malloc2", 00:29:35.341 "malloc3" 00:29:35.341 ], 00:29:35.341 "strip_size_kb": 64, 00:29:35.341 "superblock": false, 00:29:35.341 "method": "bdev_raid_create", 00:29:35.341 "req_id": 1 00:29:35.341 } 00:29:35.341 Got JSON-RPC error response 00:29:35.341 response: 00:29:35.341 { 00:29:35.341 "code": -17, 00:29:35.341 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:35.341 } 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.341 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.341 [2024-11-20 07:26:59.455711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:35.341 [2024-11-20 07:26:59.455798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.341 [2024-11-20 07:26:59.455828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:35.341 [2024-11-20 07:26:59.455842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.342 [2024-11-20 07:26:59.458777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.342 [2024-11-20 07:26:59.458819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:35.342 [2024-11-20 07:26:59.458995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:35.342 [2024-11-20 07:26:59.459059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:35.342 pt1 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.342 "name": "raid_bdev1", 00:29:35.342 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:35.342 "strip_size_kb": 64, 00:29:35.342 "state": "configuring", 00:29:35.342 "raid_level": "raid5f", 00:29:35.342 "superblock": true, 00:29:35.342 "num_base_bdevs": 3, 00:29:35.342 "num_base_bdevs_discovered": 1, 00:29:35.342 
"num_base_bdevs_operational": 3, 00:29:35.342 "base_bdevs_list": [ 00:29:35.342 { 00:29:35.342 "name": "pt1", 00:29:35.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:35.342 "is_configured": true, 00:29:35.342 "data_offset": 2048, 00:29:35.342 "data_size": 63488 00:29:35.342 }, 00:29:35.342 { 00:29:35.342 "name": null, 00:29:35.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.342 "is_configured": false, 00:29:35.342 "data_offset": 2048, 00:29:35.342 "data_size": 63488 00:29:35.342 }, 00:29:35.342 { 00:29:35.342 "name": null, 00:29:35.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:35.342 "is_configured": false, 00:29:35.342 "data_offset": 2048, 00:29:35.342 "data_size": 63488 00:29:35.342 } 00:29:35.342 ] 00:29:35.342 }' 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.342 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.910 [2024-11-20 07:26:59.987873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:35.910 [2024-11-20 07:26:59.987992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.910 [2024-11-20 07:26:59.988039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:35.910 [2024-11-20 07:26:59.988052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.910 [2024-11-20 07:26:59.988562] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.910 [2024-11-20 07:26:59.988642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:35.910 [2024-11-20 07:26:59.988753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:35.910 [2024-11-20 07:26:59.988784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:35.910 pt2 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.910 [2024-11-20 07:26:59.995858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:35.910 07:26:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.910 "name": "raid_bdev1", 00:29:35.910 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:35.910 "strip_size_kb": 64, 00:29:35.910 "state": "configuring", 00:29:35.910 "raid_level": "raid5f", 00:29:35.910 "superblock": true, 00:29:35.910 "num_base_bdevs": 3, 00:29:35.910 "num_base_bdevs_discovered": 1, 00:29:35.910 "num_base_bdevs_operational": 3, 00:29:35.910 "base_bdevs_list": [ 00:29:35.910 { 00:29:35.910 "name": "pt1", 00:29:35.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:35.910 "is_configured": true, 00:29:35.910 "data_offset": 2048, 00:29:35.910 "data_size": 63488 00:29:35.910 }, 00:29:35.910 { 00:29:35.910 "name": null, 00:29:35.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.910 "is_configured": false, 00:29:35.910 "data_offset": 0, 00:29:35.910 "data_size": 63488 00:29:35.910 }, 00:29:35.910 { 00:29:35.910 "name": null, 00:29:35.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:35.910 "is_configured": false, 00:29:35.910 "data_offset": 2048, 00:29:35.910 "data_size": 63488 00:29:35.910 } 00:29:35.910 ] 00:29:35.910 }' 00:29:35.910 07:27:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.910 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.505 [2024-11-20 07:27:00.532054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:36.505 [2024-11-20 07:27:00.532151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.505 [2024-11-20 07:27:00.532177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:36.505 [2024-11-20 07:27:00.532194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.505 [2024-11-20 07:27:00.532835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.505 [2024-11-20 07:27:00.532866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:36.505 [2024-11-20 07:27:00.532967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:36.505 [2024-11-20 07:27:00.533011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:36.505 pt2 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:36.505 07:27:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.505 [2024-11-20 07:27:00.544089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:36.505 [2024-11-20 07:27:00.544334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.505 [2024-11-20 07:27:00.544368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:36.505 [2024-11-20 07:27:00.544386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.505 [2024-11-20 07:27:00.544976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.505 [2024-11-20 07:27:00.545046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:36.505 [2024-11-20 07:27:00.545144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:36.505 [2024-11-20 07:27:00.545178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:36.505 [2024-11-20 07:27:00.545332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:36.505 [2024-11-20 07:27:00.545359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:36.505 [2024-11-20 07:27:00.545717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:36.505 [2024-11-20 07:27:00.550503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:36.505 [2024-11-20 07:27:00.550525] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:36.505 [2024-11-20 07:27:00.550810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.505 pt3 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:36.505 "name": "raid_bdev1", 00:29:36.505 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:36.505 "strip_size_kb": 64, 00:29:36.505 "state": "online", 00:29:36.505 "raid_level": "raid5f", 00:29:36.505 "superblock": true, 00:29:36.505 "num_base_bdevs": 3, 00:29:36.505 "num_base_bdevs_discovered": 3, 00:29:36.505 "num_base_bdevs_operational": 3, 00:29:36.505 "base_bdevs_list": [ 00:29:36.505 { 00:29:36.505 "name": "pt1", 00:29:36.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:36.505 "is_configured": true, 00:29:36.505 "data_offset": 2048, 00:29:36.505 "data_size": 63488 00:29:36.505 }, 00:29:36.505 { 00:29:36.505 "name": "pt2", 00:29:36.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.505 "is_configured": true, 00:29:36.505 "data_offset": 2048, 00:29:36.505 "data_size": 63488 00:29:36.505 }, 00:29:36.505 { 00:29:36.505 "name": "pt3", 00:29:36.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:36.505 "is_configured": true, 00:29:36.505 "data_offset": 2048, 00:29:36.505 "data_size": 63488 00:29:36.505 } 00:29:36.505 ] 00:29:36.505 }' 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:36.505 07:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:37.086 
07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:37.086 [2024-11-20 07:27:01.113174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.086 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.086 "name": "raid_bdev1", 00:29:37.086 "aliases": [ 00:29:37.086 "de784780-efc1-4e7c-b1ba-d48c7487d3e8" 00:29:37.086 ], 00:29:37.086 "product_name": "Raid Volume", 00:29:37.086 "block_size": 512, 00:29:37.086 "num_blocks": 126976, 00:29:37.086 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:37.086 "assigned_rate_limits": { 00:29:37.086 "rw_ios_per_sec": 0, 00:29:37.086 "rw_mbytes_per_sec": 0, 00:29:37.086 "r_mbytes_per_sec": 0, 00:29:37.086 "w_mbytes_per_sec": 0 00:29:37.086 }, 00:29:37.086 "claimed": false, 00:29:37.086 "zoned": false, 00:29:37.086 "supported_io_types": { 00:29:37.086 "read": true, 00:29:37.086 "write": true, 00:29:37.086 "unmap": false, 00:29:37.086 "flush": false, 00:29:37.086 "reset": true, 00:29:37.086 "nvme_admin": false, 00:29:37.086 "nvme_io": false, 00:29:37.086 "nvme_io_md": false, 00:29:37.086 "write_zeroes": true, 00:29:37.086 "zcopy": false, 00:29:37.086 "get_zone_info": false, 
00:29:37.086 "zone_management": false, 00:29:37.086 "zone_append": false, 00:29:37.086 "compare": false, 00:29:37.086 "compare_and_write": false, 00:29:37.086 "abort": false, 00:29:37.086 "seek_hole": false, 00:29:37.086 "seek_data": false, 00:29:37.086 "copy": false, 00:29:37.086 "nvme_iov_md": false 00:29:37.086 }, 00:29:37.086 "driver_specific": { 00:29:37.086 "raid": { 00:29:37.086 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:37.086 "strip_size_kb": 64, 00:29:37.087 "state": "online", 00:29:37.087 "raid_level": "raid5f", 00:29:37.087 "superblock": true, 00:29:37.087 "num_base_bdevs": 3, 00:29:37.087 "num_base_bdevs_discovered": 3, 00:29:37.087 "num_base_bdevs_operational": 3, 00:29:37.087 "base_bdevs_list": [ 00:29:37.087 { 00:29:37.087 "name": "pt1", 00:29:37.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:37.087 "is_configured": true, 00:29:37.087 "data_offset": 2048, 00:29:37.087 "data_size": 63488 00:29:37.087 }, 00:29:37.087 { 00:29:37.087 "name": "pt2", 00:29:37.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.087 "is_configured": true, 00:29:37.087 "data_offset": 2048, 00:29:37.087 "data_size": 63488 00:29:37.087 }, 00:29:37.087 { 00:29:37.087 "name": "pt3", 00:29:37.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.087 "is_configured": true, 00:29:37.087 "data_offset": 2048, 00:29:37.087 "data_size": 63488 00:29:37.087 } 00:29:37.087 ] 00:29:37.087 } 00:29:37.087 } 00:29:37.087 }' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:37.087 pt2 00:29:37.087 pt3' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:37.087 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.088 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.088 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:37.350 [2024-11-20 07:27:01.457220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de784780-efc1-4e7c-b1ba-d48c7487d3e8 '!=' de784780-efc1-4e7c-b1ba-d48c7487d3e8 ']' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:37.350 07:27:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.350 [2024-11-20 07:27:01.513101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.350 "name": "raid_bdev1", 00:29:37.350 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:37.350 "strip_size_kb": 64, 00:29:37.350 "state": "online", 00:29:37.350 "raid_level": "raid5f", 00:29:37.350 "superblock": true, 00:29:37.350 "num_base_bdevs": 3, 00:29:37.350 "num_base_bdevs_discovered": 2, 00:29:37.350 "num_base_bdevs_operational": 2, 00:29:37.350 "base_bdevs_list": [ 00:29:37.350 { 00:29:37.350 "name": null, 00:29:37.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.350 "is_configured": false, 00:29:37.350 "data_offset": 0, 00:29:37.350 "data_size": 63488 00:29:37.350 }, 00:29:37.350 { 00:29:37.350 "name": "pt2", 00:29:37.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.350 "is_configured": true, 00:29:37.350 "data_offset": 2048, 00:29:37.350 "data_size": 63488 00:29:37.350 }, 00:29:37.350 { 00:29:37.350 "name": "pt3", 00:29:37.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.350 "is_configured": true, 00:29:37.350 "data_offset": 2048, 00:29:37.350 "data_size": 63488 00:29:37.350 } 00:29:37.350 ] 00:29:37.350 }' 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.350 07:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 [2024-11-20 07:27:02.037178] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:29:37.919 [2024-11-20 07:27:02.037210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:37.919 [2024-11-20 07:27:02.037300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:37.919 [2024-11-20 07:27:02.037372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:37.919 [2024-11-20 07:27:02.037392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 07:27:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 [2024-11-20 07:27:02.125156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:37.919 [2024-11-20 07:27:02.125242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:37.919 [2024-11-20 07:27:02.125267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:37.919 [2024-11-20 07:27:02.125282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:29:37.919 [2024-11-20 07:27:02.128288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:37.919 [2024-11-20 07:27:02.128347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:37.919 [2024-11-20 07:27:02.128446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:37.919 [2024-11-20 07:27:02.128502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:37.919 pt2 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.919 "name": "raid_bdev1", 00:29:37.919 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:37.919 "strip_size_kb": 64, 00:29:37.919 "state": "configuring", 00:29:37.919 "raid_level": "raid5f", 00:29:37.919 "superblock": true, 00:29:37.919 "num_base_bdevs": 3, 00:29:37.919 "num_base_bdevs_discovered": 1, 00:29:37.919 "num_base_bdevs_operational": 2, 00:29:37.919 "base_bdevs_list": [ 00:29:37.919 { 00:29:37.919 "name": null, 00:29:37.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.919 "is_configured": false, 00:29:37.919 "data_offset": 2048, 00:29:37.919 "data_size": 63488 00:29:37.919 }, 00:29:37.919 { 00:29:37.919 "name": "pt2", 00:29:37.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.919 "is_configured": true, 00:29:37.919 "data_offset": 2048, 00:29:37.919 "data_size": 63488 00:29:37.919 }, 00:29:37.919 { 00:29:37.919 "name": null, 00:29:37.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.919 "is_configured": false, 00:29:37.919 "data_offset": 2048, 00:29:37.919 "data_size": 63488 00:29:37.919 } 00:29:37.919 ] 00:29:37.919 }' 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.919 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.487 [2024-11-20 07:27:02.637286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:38.487 [2024-11-20 07:27:02.637412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.487 [2024-11-20 07:27:02.637446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:38.487 [2024-11-20 07:27:02.637463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.487 [2024-11-20 07:27:02.638178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.487 [2024-11-20 07:27:02.638219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:38.487 [2024-11-20 07:27:02.638327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:38.487 [2024-11-20 07:27:02.638398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:38.487 [2024-11-20 07:27:02.638630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:38.487 [2024-11-20 07:27:02.638651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:38.487 [2024-11-20 07:27:02.639014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:38.487 [2024-11-20 07:27:02.643929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:38.487 [2024-11-20 07:27:02.644135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:29:38.487 [2024-11-20 07:27:02.644522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.487 pt3 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:38.487 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:38.488 "name": "raid_bdev1", 00:29:38.488 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:38.488 "strip_size_kb": 64, 00:29:38.488 "state": "online", 00:29:38.488 "raid_level": "raid5f", 00:29:38.488 "superblock": true, 00:29:38.488 "num_base_bdevs": 3, 00:29:38.488 "num_base_bdevs_discovered": 2, 00:29:38.488 "num_base_bdevs_operational": 2, 00:29:38.488 "base_bdevs_list": [ 00:29:38.488 { 00:29:38.488 "name": null, 00:29:38.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.488 "is_configured": false, 00:29:38.488 "data_offset": 2048, 00:29:38.488 "data_size": 63488 00:29:38.488 }, 00:29:38.488 { 00:29:38.488 "name": "pt2", 00:29:38.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:38.488 "is_configured": true, 00:29:38.488 "data_offset": 2048, 00:29:38.488 "data_size": 63488 00:29:38.488 }, 00:29:38.488 { 00:29:38.488 "name": "pt3", 00:29:38.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:38.488 "is_configured": true, 00:29:38.488 "data_offset": 2048, 00:29:38.488 "data_size": 63488 00:29:38.488 } 00:29:38.488 ] 00:29:38.488 }' 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:38.488 07:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.056 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.057 [2024-11-20 07:27:03.174488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:39.057 [2024-11-20 07:27:03.174524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:39.057 [2024-11-20 07:27:03.174654] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:29:39.057 [2024-11-20 07:27:03.174748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:39.057 [2024-11-20 07:27:03.174763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:39.057 07:27:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.057 [2024-11-20 07:27:03.246537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:39.057 [2024-11-20 07:27:03.246646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.057 [2024-11-20 07:27:03.246676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:39.057 [2024-11-20 07:27:03.246690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.057 [2024-11-20 07:27:03.249517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.057 [2024-11-20 07:27:03.249753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:39.057 [2024-11-20 07:27:03.249868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:39.057 [2024-11-20 07:27:03.249925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:39.057 [2024-11-20 07:27:03.250139] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:39.057 [2024-11-20 07:27:03.250155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:39.057 [2024-11-20 07:27:03.250174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:29:39.057 [2024-11-20 07:27:03.250266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:39.057 pt1 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:29:39.057 07:27:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.057 "name": "raid_bdev1", 00:29:39.057 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:39.057 "strip_size_kb": 64, 00:29:39.057 "state": "configuring", 00:29:39.057 "raid_level": "raid5f", 00:29:39.057 
"superblock": true, 00:29:39.057 "num_base_bdevs": 3, 00:29:39.057 "num_base_bdevs_discovered": 1, 00:29:39.057 "num_base_bdevs_operational": 2, 00:29:39.057 "base_bdevs_list": [ 00:29:39.057 { 00:29:39.057 "name": null, 00:29:39.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.057 "is_configured": false, 00:29:39.057 "data_offset": 2048, 00:29:39.057 "data_size": 63488 00:29:39.057 }, 00:29:39.057 { 00:29:39.057 "name": "pt2", 00:29:39.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:39.057 "is_configured": true, 00:29:39.057 "data_offset": 2048, 00:29:39.057 "data_size": 63488 00:29:39.057 }, 00:29:39.057 { 00:29:39.057 "name": null, 00:29:39.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:39.057 "is_configured": false, 00:29:39.057 "data_offset": 2048, 00:29:39.057 "data_size": 63488 00:29:39.057 } 00:29:39.057 ] 00:29:39.057 }' 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.057 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.626 [2024-11-20 07:27:03.830798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:39.626 [2024-11-20 07:27:03.830873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.626 [2024-11-20 07:27:03.830905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:39.626 [2024-11-20 07:27:03.830919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.626 [2024-11-20 07:27:03.831655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.626 [2024-11-20 07:27:03.831698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:39.626 [2024-11-20 07:27:03.831820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:39.626 [2024-11-20 07:27:03.831851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:39.626 [2024-11-20 07:27:03.832047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:39.626 [2024-11-20 07:27:03.832061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:39.626 [2024-11-20 07:27:03.832348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:39.626 [2024-11-20 07:27:03.837299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:39.626 [2024-11-20 07:27:03.837328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:39.626 [2024-11-20 07:27:03.837677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.626 pt3 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.626 "name": "raid_bdev1", 00:29:39.626 "uuid": "de784780-efc1-4e7c-b1ba-d48c7487d3e8", 00:29:39.626 "strip_size_kb": 64, 00:29:39.626 "state": "online", 00:29:39.626 "raid_level": 
"raid5f", 00:29:39.626 "superblock": true, 00:29:39.626 "num_base_bdevs": 3, 00:29:39.626 "num_base_bdevs_discovered": 2, 00:29:39.626 "num_base_bdevs_operational": 2, 00:29:39.626 "base_bdevs_list": [ 00:29:39.626 { 00:29:39.626 "name": null, 00:29:39.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.626 "is_configured": false, 00:29:39.626 "data_offset": 2048, 00:29:39.626 "data_size": 63488 00:29:39.626 }, 00:29:39.626 { 00:29:39.626 "name": "pt2", 00:29:39.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:39.626 "is_configured": true, 00:29:39.626 "data_offset": 2048, 00:29:39.626 "data_size": 63488 00:29:39.626 }, 00:29:39.626 { 00:29:39.626 "name": "pt3", 00:29:39.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:39.626 "is_configured": true, 00:29:39.626 "data_offset": 2048, 00:29:39.626 "data_size": 63488 00:29:39.626 } 00:29:39.626 ] 00:29:39.626 }' 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.626 07:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.195 [2024-11-20 07:27:04.459994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:40.195 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' de784780-efc1-4e7c-b1ba-d48c7487d3e8 '!=' de784780-efc1-4e7c-b1ba-d48c7487d3e8 ']' 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81687 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81687 ']' 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81687 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81687 00:29:40.454 killing process with pid 81687 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81687' 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81687 00:29:40.454 [2024-11-20 07:27:04.534418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:40.454 07:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81687 
00:29:40.454 [2024-11-20 07:27:04.534545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:40.454 [2024-11-20 07:27:04.534654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:40.454 [2024-11-20 07:27:04.534672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:40.713 [2024-11-20 07:27:04.783340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:41.651 07:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:41.651 00:29:41.651 real 0m8.694s 00:29:41.651 user 0m14.285s 00:29:41.651 sys 0m1.308s 00:29:41.651 ************************************ 00:29:41.651 END TEST raid5f_superblock_test 00:29:41.651 ************************************ 00:29:41.651 07:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.651 07:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.651 07:27:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:29:41.651 07:27:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:29:41.651 07:27:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:41.651 07:27:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.651 07:27:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:41.651 ************************************ 00:29:41.651 START TEST raid5f_rebuild_test 00:29:41.651 ************************************ 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:41.651 07:27:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82140 00:29:41.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82140 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82140 ']' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.651 07:27:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.651 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:29:41.651 Zero copy mechanism will not be used. 00:29:41.651 [2024-11-20 07:27:05.924632] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:41.651 [2024-11-20 07:27:05.924846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82140 ] 00:29:41.910 [2024-11-20 07:27:06.110016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.203 [2024-11-20 07:27:06.234800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.203 [2024-11-20 07:27:06.413911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:42.203 [2024-11-20 07:27:06.413995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:42.793 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.793 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 BaseBdev1_malloc 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 [2024-11-20 07:27:06.891355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:42.794 [2024-11-20 07:27:06.891453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.794 [2024-11-20 07:27:06.891504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:42.794 [2024-11-20 07:27:06.891521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.794 [2024-11-20 07:27:06.894231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.794 [2024-11-20 07:27:06.894292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:42.794 BaseBdev1 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 BaseBdev2_malloc 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 [2024-11-20 07:27:06.943803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:29:42.794 [2024-11-20 07:27:06.943900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.794 [2024-11-20 07:27:06.943931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:42.794 [2024-11-20 07:27:06.943965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.794 [2024-11-20 07:27:06.946906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.794 [2024-11-20 07:27:06.946994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:42.794 BaseBdev2 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 BaseBdev3_malloc 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 [2024-11-20 07:27:07.011434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:42.794 [2024-11-20 07:27:07.011537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.794 [2024-11-20 07:27:07.011569] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:29:42.794 [2024-11-20 07:27:07.011602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.794 [2024-11-20 07:27:07.014604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.794 [2024-11-20 07:27:07.014873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:42.794 BaseBdev3 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 spare_malloc 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 spare_delay 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.794 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.794 [2024-11-20 07:27:07.076417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:42.794 [2024-11-20 07:27:07.076495] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.794 [2024-11-20 07:27:07.076526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:42.794 [2024-11-20 07:27:07.076543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.794 [2024-11-20 07:27:07.080048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.794 [2024-11-20 07:27:07.080098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:43.054 spare 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.054 [2024-11-20 07:27:07.088522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:43.054 [2024-11-20 07:27:07.090915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:43.054 [2024-11-20 07:27:07.091032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:43.054 [2024-11-20 07:27:07.091162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:43.054 [2024-11-20 07:27:07.091180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:43.054 [2024-11-20 07:27:07.091569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:43.054 [2024-11-20 07:27:07.096269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:43.054 [2024-11-20 07:27:07.096298] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:43.054 [2024-11-20 07:27:07.096579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.054 07:27:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:43.054 "name": "raid_bdev1", 00:29:43.054 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:43.054 "strip_size_kb": 64, 00:29:43.054 "state": "online", 00:29:43.054 "raid_level": "raid5f", 00:29:43.054 "superblock": false, 00:29:43.054 "num_base_bdevs": 3, 00:29:43.054 "num_base_bdevs_discovered": 3, 00:29:43.054 "num_base_bdevs_operational": 3, 00:29:43.054 "base_bdevs_list": [ 00:29:43.054 { 00:29:43.054 "name": "BaseBdev1", 00:29:43.054 "uuid": "96c2fbdc-b99d-5308-a36d-70ebe7d66c82", 00:29:43.054 "is_configured": true, 00:29:43.054 "data_offset": 0, 00:29:43.054 "data_size": 65536 00:29:43.054 }, 00:29:43.054 { 00:29:43.054 "name": "BaseBdev2", 00:29:43.054 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:43.054 "is_configured": true, 00:29:43.054 "data_offset": 0, 00:29:43.054 "data_size": 65536 00:29:43.054 }, 00:29:43.054 { 00:29:43.054 "name": "BaseBdev3", 00:29:43.054 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:43.054 "is_configured": true, 00:29:43.054 "data_offset": 0, 00:29:43.054 "data_size": 65536 00:29:43.054 } 00:29:43.054 ] 00:29:43.054 }' 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:43.054 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.622 [2024-11-20 07:27:07.610899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:29:43.622 07:27:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:43.881 [2024-11-20 07:27:07.998882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:43.881 /dev/nbd0 00:29:43.881 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:43.881 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:43.881 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:43.882 1+0 records in 00:29:43.882 1+0 records out 00:29:43.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384005 s, 10.7 MB/s 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:29:43.882 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:29:44.451 512+0 records in 00:29:44.451 512+0 records out 00:29:44.451 67108864 bytes (67 MB, 64 MiB) copied, 0.453952 s, 148 MB/s 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:44.451 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:44.710 [2024-11-20 07:27:08.836502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:44.710 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.711 [2024-11-20 07:27:08.858455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:44.711 "name": "raid_bdev1", 00:29:44.711 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:44.711 "strip_size_kb": 64, 00:29:44.711 "state": "online", 00:29:44.711 "raid_level": "raid5f", 00:29:44.711 "superblock": false, 00:29:44.711 "num_base_bdevs": 3, 00:29:44.711 "num_base_bdevs_discovered": 2, 00:29:44.711 "num_base_bdevs_operational": 2, 00:29:44.711 "base_bdevs_list": [ 00:29:44.711 { 00:29:44.711 "name": null, 00:29:44.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.711 "is_configured": false, 00:29:44.711 "data_offset": 0, 00:29:44.711 "data_size": 65536 00:29:44.711 }, 00:29:44.711 { 00:29:44.711 "name": "BaseBdev2", 00:29:44.711 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:44.711 "is_configured": true, 00:29:44.711 "data_offset": 0, 00:29:44.711 "data_size": 65536 00:29:44.711 }, 00:29:44.711 { 00:29:44.711 "name": "BaseBdev3", 00:29:44.711 "uuid": 
"5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:44.711 "is_configured": true, 00:29:44.711 "data_offset": 0, 00:29:44.711 "data_size": 65536 00:29:44.711 } 00:29:44.711 ] 00:29:44.711 }' 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:44.711 07:27:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.279 07:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:45.279 07:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.279 07:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.279 [2024-11-20 07:27:09.346739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.279 [2024-11-20 07:27:09.361470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:29:45.279 07:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.279 07:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:45.279 [2024-11-20 07:27:09.368973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.218 07:27:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:46.218 "name": "raid_bdev1", 00:29:46.218 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:46.218 "strip_size_kb": 64, 00:29:46.218 "state": "online", 00:29:46.218 "raid_level": "raid5f", 00:29:46.218 "superblock": false, 00:29:46.218 "num_base_bdevs": 3, 00:29:46.218 "num_base_bdevs_discovered": 3, 00:29:46.218 "num_base_bdevs_operational": 3, 00:29:46.218 "process": { 00:29:46.218 "type": "rebuild", 00:29:46.218 "target": "spare", 00:29:46.218 "progress": { 00:29:46.218 "blocks": 18432, 00:29:46.218 "percent": 14 00:29:46.218 } 00:29:46.218 }, 00:29:46.218 "base_bdevs_list": [ 00:29:46.218 { 00:29:46.218 "name": "spare", 00:29:46.218 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:46.218 "is_configured": true, 00:29:46.218 "data_offset": 0, 00:29:46.218 "data_size": 65536 00:29:46.218 }, 00:29:46.218 { 00:29:46.218 "name": "BaseBdev2", 00:29:46.218 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:46.218 "is_configured": true, 00:29:46.218 "data_offset": 0, 00:29:46.218 "data_size": 65536 00:29:46.218 }, 00:29:46.218 { 00:29:46.218 "name": "BaseBdev3", 00:29:46.218 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:46.218 "is_configured": true, 00:29:46.218 "data_offset": 0, 00:29:46.218 "data_size": 65536 00:29:46.218 } 00:29:46.218 ] 00:29:46.218 }' 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.218 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.477 [2024-11-20 07:27:10.522949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.477 [2024-11-20 07:27:10.584328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:46.477 [2024-11-20 07:27:10.584446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.477 [2024-11-20 07:27:10.584474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.477 [2024-11-20 07:27:10.584485] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.477 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.478 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.478 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.478 "name": "raid_bdev1", 00:29:46.478 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:46.478 "strip_size_kb": 64, 00:29:46.478 "state": "online", 00:29:46.478 "raid_level": "raid5f", 00:29:46.478 "superblock": false, 00:29:46.478 "num_base_bdevs": 3, 00:29:46.478 "num_base_bdevs_discovered": 2, 00:29:46.478 "num_base_bdevs_operational": 2, 00:29:46.478 "base_bdevs_list": [ 00:29:46.478 { 00:29:46.478 "name": null, 00:29:46.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.478 "is_configured": false, 00:29:46.478 "data_offset": 0, 00:29:46.478 "data_size": 65536 00:29:46.478 }, 00:29:46.478 { 00:29:46.478 "name": "BaseBdev2", 00:29:46.478 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:46.478 "is_configured": true, 00:29:46.478 "data_offset": 0, 00:29:46.478 "data_size": 65536 00:29:46.478 }, 00:29:46.478 { 00:29:46.478 "name": "BaseBdev3", 00:29:46.478 "uuid": 
"5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:46.478 "is_configured": true, 00:29:46.478 "data_offset": 0, 00:29:46.478 "data_size": 65536 00:29:46.478 } 00:29:46.478 ] 00:29:46.478 }' 00:29:46.478 07:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.478 07:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.046 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:47.047 "name": "raid_bdev1", 00:29:47.047 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:47.047 "strip_size_kb": 64, 00:29:47.047 "state": "online", 00:29:47.047 "raid_level": "raid5f", 00:29:47.047 "superblock": false, 00:29:47.047 "num_base_bdevs": 3, 00:29:47.047 "num_base_bdevs_discovered": 2, 00:29:47.047 "num_base_bdevs_operational": 2, 00:29:47.047 "base_bdevs_list": [ 00:29:47.047 { 00:29:47.047 
"name": null, 00:29:47.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.047 "is_configured": false, 00:29:47.047 "data_offset": 0, 00:29:47.047 "data_size": 65536 00:29:47.047 }, 00:29:47.047 { 00:29:47.047 "name": "BaseBdev2", 00:29:47.047 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:47.047 "is_configured": true, 00:29:47.047 "data_offset": 0, 00:29:47.047 "data_size": 65536 00:29:47.047 }, 00:29:47.047 { 00:29:47.047 "name": "BaseBdev3", 00:29:47.047 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:47.047 "is_configured": true, 00:29:47.047 "data_offset": 0, 00:29:47.047 "data_size": 65536 00:29:47.047 } 00:29:47.047 ] 00:29:47.047 }' 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.047 [2024-11-20 07:27:11.275762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:47.047 [2024-11-20 07:27:11.289492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.047 07:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:47.047 [2024-11-20 07:27:11.296761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.433 "name": "raid_bdev1", 00:29:48.433 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:48.433 "strip_size_kb": 64, 00:29:48.433 "state": "online", 00:29:48.433 "raid_level": "raid5f", 00:29:48.433 "superblock": false, 00:29:48.433 "num_base_bdevs": 3, 00:29:48.433 "num_base_bdevs_discovered": 3, 00:29:48.433 "num_base_bdevs_operational": 3, 00:29:48.433 "process": { 00:29:48.433 "type": "rebuild", 00:29:48.433 "target": "spare", 00:29:48.433 "progress": { 00:29:48.433 "blocks": 18432, 00:29:48.433 "percent": 14 00:29:48.433 } 00:29:48.433 }, 00:29:48.433 "base_bdevs_list": [ 00:29:48.433 { 00:29:48.433 "name": "spare", 00:29:48.433 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:48.433 "is_configured": true, 00:29:48.433 "data_offset": 0, 
00:29:48.433 "data_size": 65536 00:29:48.433 }, 00:29:48.433 { 00:29:48.433 "name": "BaseBdev2", 00:29:48.433 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:48.433 "is_configured": true, 00:29:48.433 "data_offset": 0, 00:29:48.433 "data_size": 65536 00:29:48.433 }, 00:29:48.433 { 00:29:48.433 "name": "BaseBdev3", 00:29:48.433 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:48.433 "is_configured": true, 00:29:48.433 "data_offset": 0, 00:29:48.433 "data_size": 65536 00:29:48.433 } 00:29:48.433 ] 00:29:48.433 }' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=596 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:48.433 07:27:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.433 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.433 "name": "raid_bdev1", 00:29:48.433 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:48.433 "strip_size_kb": 64, 00:29:48.434 "state": "online", 00:29:48.434 "raid_level": "raid5f", 00:29:48.434 "superblock": false, 00:29:48.434 "num_base_bdevs": 3, 00:29:48.434 "num_base_bdevs_discovered": 3, 00:29:48.434 "num_base_bdevs_operational": 3, 00:29:48.434 "process": { 00:29:48.434 "type": "rebuild", 00:29:48.434 "target": "spare", 00:29:48.434 "progress": { 00:29:48.434 "blocks": 22528, 00:29:48.434 "percent": 17 00:29:48.434 } 00:29:48.434 }, 00:29:48.434 "base_bdevs_list": [ 00:29:48.434 { 00:29:48.434 "name": "spare", 00:29:48.434 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:48.434 "is_configured": true, 00:29:48.434 "data_offset": 0, 00:29:48.434 "data_size": 65536 00:29:48.434 }, 00:29:48.434 { 00:29:48.434 "name": "BaseBdev2", 00:29:48.434 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:48.434 "is_configured": true, 00:29:48.434 "data_offset": 0, 00:29:48.434 "data_size": 65536 00:29:48.434 }, 00:29:48.434 { 00:29:48.434 "name": "BaseBdev3", 00:29:48.434 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:48.434 "is_configured": true, 00:29:48.434 "data_offset": 0, 00:29:48.434 "data_size": 65536 00:29:48.434 } 
00:29:48.434 ] 00:29:48.434 }' 00:29:48.434 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.434 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.434 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.434 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.434 07:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.366 07:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:49.624 "name": "raid_bdev1", 00:29:49.624 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:49.624 
"strip_size_kb": 64, 00:29:49.624 "state": "online", 00:29:49.624 "raid_level": "raid5f", 00:29:49.624 "superblock": false, 00:29:49.624 "num_base_bdevs": 3, 00:29:49.624 "num_base_bdevs_discovered": 3, 00:29:49.624 "num_base_bdevs_operational": 3, 00:29:49.624 "process": { 00:29:49.624 "type": "rebuild", 00:29:49.624 "target": "spare", 00:29:49.624 "progress": { 00:29:49.624 "blocks": 45056, 00:29:49.624 "percent": 34 00:29:49.624 } 00:29:49.624 }, 00:29:49.624 "base_bdevs_list": [ 00:29:49.624 { 00:29:49.624 "name": "spare", 00:29:49.624 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:49.624 "is_configured": true, 00:29:49.624 "data_offset": 0, 00:29:49.624 "data_size": 65536 00:29:49.624 }, 00:29:49.624 { 00:29:49.624 "name": "BaseBdev2", 00:29:49.624 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:49.624 "is_configured": true, 00:29:49.624 "data_offset": 0, 00:29:49.624 "data_size": 65536 00:29:49.624 }, 00:29:49.624 { 00:29:49.624 "name": "BaseBdev3", 00:29:49.624 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:49.624 "is_configured": true, 00:29:49.624 "data_offset": 0, 00:29:49.624 "data_size": 65536 00:29:49.624 } 00:29:49.624 ] 00:29:49.624 }' 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.624 07:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:50.560 07:27:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.560 07:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.561 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:50.561 "name": "raid_bdev1", 00:29:50.561 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:50.561 "strip_size_kb": 64, 00:29:50.561 "state": "online", 00:29:50.561 "raid_level": "raid5f", 00:29:50.561 "superblock": false, 00:29:50.561 "num_base_bdevs": 3, 00:29:50.561 "num_base_bdevs_discovered": 3, 00:29:50.561 "num_base_bdevs_operational": 3, 00:29:50.561 "process": { 00:29:50.561 "type": "rebuild", 00:29:50.561 "target": "spare", 00:29:50.561 "progress": { 00:29:50.561 "blocks": 69632, 00:29:50.561 "percent": 53 00:29:50.561 } 00:29:50.561 }, 00:29:50.561 "base_bdevs_list": [ 00:29:50.561 { 00:29:50.561 "name": "spare", 00:29:50.561 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:50.561 "is_configured": true, 00:29:50.561 "data_offset": 0, 00:29:50.561 "data_size": 65536 00:29:50.561 }, 00:29:50.561 { 00:29:50.561 "name": "BaseBdev2", 00:29:50.561 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:50.561 
"is_configured": true, 00:29:50.561 "data_offset": 0, 00:29:50.561 "data_size": 65536 00:29:50.561 }, 00:29:50.561 { 00:29:50.561 "name": "BaseBdev3", 00:29:50.561 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:50.561 "is_configured": true, 00:29:50.561 "data_offset": 0, 00:29:50.561 "data_size": 65536 00:29:50.561 } 00:29:50.561 ] 00:29:50.561 }' 00:29:50.561 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:50.820 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:50.820 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:50.820 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:50.820 07:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:51.755 "name": "raid_bdev1", 00:29:51.755 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:51.755 "strip_size_kb": 64, 00:29:51.755 "state": "online", 00:29:51.755 "raid_level": "raid5f", 00:29:51.755 "superblock": false, 00:29:51.755 "num_base_bdevs": 3, 00:29:51.755 "num_base_bdevs_discovered": 3, 00:29:51.755 "num_base_bdevs_operational": 3, 00:29:51.755 "process": { 00:29:51.755 "type": "rebuild", 00:29:51.755 "target": "spare", 00:29:51.755 "progress": { 00:29:51.755 "blocks": 92160, 00:29:51.755 "percent": 70 00:29:51.755 } 00:29:51.755 }, 00:29:51.755 "base_bdevs_list": [ 00:29:51.755 { 00:29:51.755 "name": "spare", 00:29:51.755 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:51.755 "is_configured": true, 00:29:51.755 "data_offset": 0, 00:29:51.755 "data_size": 65536 00:29:51.755 }, 00:29:51.755 { 00:29:51.755 "name": "BaseBdev2", 00:29:51.755 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:51.755 "is_configured": true, 00:29:51.755 "data_offset": 0, 00:29:51.755 "data_size": 65536 00:29:51.755 }, 00:29:51.755 { 00:29:51.755 "name": "BaseBdev3", 00:29:51.755 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:51.755 "is_configured": true, 00:29:51.755 "data_offset": 0, 00:29:51.755 "data_size": 65536 00:29:51.755 } 00:29:51.755 ] 00:29:51.755 }' 00:29:51.755 07:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:51.755 07:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:51.755 07:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:52.014 07:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.014 07:27:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:52.951 "name": "raid_bdev1", 00:29:52.951 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:52.951 "strip_size_kb": 64, 00:29:52.951 "state": "online", 00:29:52.951 "raid_level": "raid5f", 00:29:52.951 "superblock": false, 00:29:52.951 "num_base_bdevs": 3, 00:29:52.951 "num_base_bdevs_discovered": 3, 00:29:52.951 "num_base_bdevs_operational": 3, 00:29:52.951 "process": { 00:29:52.951 "type": "rebuild", 00:29:52.951 "target": "spare", 00:29:52.951 "progress": { 00:29:52.951 "blocks": 116736, 00:29:52.951 "percent": 89 00:29:52.951 } 00:29:52.951 }, 00:29:52.951 "base_bdevs_list": [ 00:29:52.951 { 
00:29:52.951 "name": "spare", 00:29:52.951 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:52.951 "is_configured": true, 00:29:52.951 "data_offset": 0, 00:29:52.951 "data_size": 65536 00:29:52.951 }, 00:29:52.951 { 00:29:52.951 "name": "BaseBdev2", 00:29:52.951 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:52.951 "is_configured": true, 00:29:52.951 "data_offset": 0, 00:29:52.951 "data_size": 65536 00:29:52.951 }, 00:29:52.951 { 00:29:52.951 "name": "BaseBdev3", 00:29:52.951 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:52.951 "is_configured": true, 00:29:52.951 "data_offset": 0, 00:29:52.951 "data_size": 65536 00:29:52.951 } 00:29:52.951 ] 00:29:52.951 }' 00:29:52.951 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:52.952 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.952 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:53.211 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:53.211 07:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:53.778 [2024-11-20 07:27:17.767706] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:53.778 [2024-11-20 07:27:17.767808] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:53.778 [2024-11-20 07:27:17.767879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.037 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:54.037 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:54.037 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:54.038 07:27:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:54.038 "name": "raid_bdev1", 00:29:54.038 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:54.038 "strip_size_kb": 64, 00:29:54.038 "state": "online", 00:29:54.038 "raid_level": "raid5f", 00:29:54.038 "superblock": false, 00:29:54.038 "num_base_bdevs": 3, 00:29:54.038 "num_base_bdevs_discovered": 3, 00:29:54.038 "num_base_bdevs_operational": 3, 00:29:54.038 "base_bdevs_list": [ 00:29:54.038 { 00:29:54.038 "name": "spare", 00:29:54.038 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:54.038 "is_configured": true, 00:29:54.038 "data_offset": 0, 00:29:54.038 "data_size": 65536 00:29:54.038 }, 00:29:54.038 { 00:29:54.038 "name": "BaseBdev2", 00:29:54.038 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:54.038 "is_configured": true, 00:29:54.038 "data_offset": 0, 00:29:54.038 "data_size": 65536 00:29:54.038 }, 00:29:54.038 { 00:29:54.038 "name": "BaseBdev3", 00:29:54.038 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:54.038 "is_configured": true, 00:29:54.038 "data_offset": 0, 00:29:54.038 "data_size": 65536 00:29:54.038 } 
00:29:54.038 ] 00:29:54.038 }' 00:29:54.038 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:54.297 "name": "raid_bdev1", 00:29:54.297 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:54.297 "strip_size_kb": 64, 00:29:54.297 "state": "online", 00:29:54.297 "raid_level": "raid5f", 00:29:54.297 "superblock": false, 
00:29:54.297 "num_base_bdevs": 3, 00:29:54.297 "num_base_bdevs_discovered": 3, 00:29:54.297 "num_base_bdevs_operational": 3, 00:29:54.297 "base_bdevs_list": [ 00:29:54.297 { 00:29:54.297 "name": "spare", 00:29:54.297 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:54.297 "is_configured": true, 00:29:54.297 "data_offset": 0, 00:29:54.297 "data_size": 65536 00:29:54.297 }, 00:29:54.297 { 00:29:54.297 "name": "BaseBdev2", 00:29:54.297 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:54.297 "is_configured": true, 00:29:54.297 "data_offset": 0, 00:29:54.297 "data_size": 65536 00:29:54.297 }, 00:29:54.297 { 00:29:54.297 "name": "BaseBdev3", 00:29:54.297 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 00:29:54.297 "is_configured": true, 00:29:54.297 "data_offset": 0, 00:29:54.297 "data_size": 65536 00:29:54.297 } 00:29:54.297 ] 00:29:54.297 }' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:54.297 
07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.297 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.556 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.556 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.556 "name": "raid_bdev1", 00:29:54.556 "uuid": "b95466ba-9b1b-4b18-8c7a-ed88c553a0b6", 00:29:54.556 "strip_size_kb": 64, 00:29:54.556 "state": "online", 00:29:54.556 "raid_level": "raid5f", 00:29:54.556 "superblock": false, 00:29:54.556 "num_base_bdevs": 3, 00:29:54.556 "num_base_bdevs_discovered": 3, 00:29:54.556 "num_base_bdevs_operational": 3, 00:29:54.556 "base_bdevs_list": [ 00:29:54.556 { 00:29:54.556 "name": "spare", 00:29:54.556 "uuid": "b4619950-2f1d-5844-ab18-92e4d1661918", 00:29:54.556 "is_configured": true, 00:29:54.556 "data_offset": 0, 00:29:54.556 "data_size": 65536 00:29:54.556 }, 00:29:54.556 { 00:29:54.556 "name": "BaseBdev2", 00:29:54.556 "uuid": "6554fe2f-08ac-56cd-a19b-24fd95aed94d", 00:29:54.556 "is_configured": true, 00:29:54.556 "data_offset": 0, 00:29:54.556 "data_size": 65536 00:29:54.556 }, 00:29:54.556 { 00:29:54.556 "name": "BaseBdev3", 00:29:54.556 "uuid": "5664f91f-7ad3-5ff0-8ce0-6853c2bb5d8c", 
00:29:54.556 "is_configured": true, 00:29:54.556 "data_offset": 0, 00:29:54.556 "data_size": 65536 00:29:54.556 } 00:29:54.556 ] 00:29:54.556 }' 00:29:54.556 07:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.556 07:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.122 [2024-11-20 07:27:19.125890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:55.122 [2024-11-20 07:27:19.125939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:55.122 [2024-11-20 07:27:19.126074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:55.122 [2024-11-20 07:27:19.126187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:55.122 [2024-11-20 07:27:19.126211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:55.122 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:55.380 /dev/nbd0 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.380 1+0 records in 00:29:55.380 1+0 records out 00:29:55.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636428 s, 6.4 MB/s 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:29:55.380 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.381 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:55.381 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:55.639 /dev/nbd1 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:55.639 07:27:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.639 1+0 records in 00:29:55.639 1+0 records out 00:29:55.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381025 s, 10.7 MB/s 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.639 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:55.640 07:27:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:29:55.640 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.640 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:55.640 07:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.899 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.159 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82140 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82140 ']' 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82140 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82140 00:29:56.419 killing process with pid 82140 00:29:56.419 Received shutdown signal, test time was about 60.000000 seconds 00:29:56.419 00:29:56.419 Latency(us) 00:29:56.419 [2024-11-20T07:27:20.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.419 [2024-11-20T07:27:20.708Z] =================================================================================================================== 00:29:56.419 [2024-11-20T07:27:20.708Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82140' 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82140 00:29:56.419 [2024-11-20 07:27:20.702015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:56.419 07:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82140 00:29:56.987 [2024-11-20 07:27:21.007357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:29:57.926 00:29:57.926 real 0m16.130s 00:29:57.926 user 0m20.618s 00:29:57.926 sys 0m2.084s 00:29:57.926 ************************************ 00:29:57.926 END TEST raid5f_rebuild_test 00:29:57.926 ************************************ 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.926 07:27:21 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:29:57.926 07:27:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:57.926 07:27:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.926 07:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:57.926 ************************************ 00:29:57.926 START TEST raid5f_rebuild_test_sb 00:29:57.926 ************************************ 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82580 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82580 00:29:57.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82580 ']' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.926 07:27:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:57.926 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:57.926 Zero copy mechanism will not be used. 00:29:57.926 [2024-11-20 07:27:22.101498] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:29:57.926 [2024-11-20 07:27:22.101739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82580 ] 00:29:58.184 [2024-11-20 07:27:22.284856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.184 [2024-11-20 07:27:22.401119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.442 [2024-11-20 07:27:22.592568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:58.442 [2024-11-20 07:27:22.592640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 BaseBdev1_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 [2024-11-20 07:27:23.123486] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:59.010 [2024-11-20 07:27:23.123575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.010 [2024-11-20 07:27:23.123654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:59.010 [2024-11-20 07:27:23.123676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.010 [2024-11-20 07:27:23.126379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.010 [2024-11-20 07:27:23.126622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:59.010 BaseBdev1 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 BaseBdev2_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 [2024-11-20 07:27:23.175759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:59.010 [2024-11-20 07:27:23.175837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:29:59.010 [2024-11-20 07:27:23.175862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:59.010 [2024-11-20 07:27:23.175880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.010 [2024-11-20 07:27:23.178416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.010 [2024-11-20 07:27:23.178477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:59.010 BaseBdev2 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 BaseBdev3_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 [2024-11-20 07:27:23.242266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:59.010 [2024-11-20 07:27:23.242406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.010 [2024-11-20 07:27:23.242436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:59.010 [2024-11-20 
07:27:23.242454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.010 [2024-11-20 07:27:23.245682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.010 [2024-11-20 07:27:23.245777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:59.010 BaseBdev3 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 spare_malloc 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.010 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.269 spare_delay 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.269 [2024-11-20 07:27:23.305684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:59.269 [2024-11-20 07:27:23.305773] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.269 [2024-11-20 07:27:23.305800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:59.269 [2024-11-20 07:27:23.305818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.269 [2024-11-20 07:27:23.308496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.269 [2024-11-20 07:27:23.308746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:59.269 spare 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.269 [2024-11-20 07:27:23.313750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:59.269 [2024-11-20 07:27:23.315996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:59.269 [2024-11-20 07:27:23.316068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:59.269 [2024-11-20 07:27:23.316265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:59.269 [2024-11-20 07:27:23.316283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:59.269 [2024-11-20 07:27:23.316544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:59.269 [2024-11-20 07:27:23.321335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:59.269 [2024-11-20 07:27:23.321503] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:59.269 [2024-11-20 07:27:23.321881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.269 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.270 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.270 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.270 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:59.270 "name": "raid_bdev1", 00:29:59.270 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:29:59.270 "strip_size_kb": 64, 00:29:59.270 "state": "online", 00:29:59.270 "raid_level": "raid5f", 00:29:59.270 "superblock": true, 00:29:59.270 "num_base_bdevs": 3, 00:29:59.270 "num_base_bdevs_discovered": 3, 00:29:59.270 "num_base_bdevs_operational": 3, 00:29:59.270 "base_bdevs_list": [ 00:29:59.270 { 00:29:59.270 "name": "BaseBdev1", 00:29:59.270 "uuid": "d158f364-1c87-5d1c-93df-c9bb162dba47", 00:29:59.270 "is_configured": true, 00:29:59.270 "data_offset": 2048, 00:29:59.270 "data_size": 63488 00:29:59.270 }, 00:29:59.270 { 00:29:59.270 "name": "BaseBdev2", 00:29:59.270 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:29:59.270 "is_configured": true, 00:29:59.270 "data_offset": 2048, 00:29:59.270 "data_size": 63488 00:29:59.270 }, 00:29:59.270 { 00:29:59.270 "name": "BaseBdev3", 00:29:59.270 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:29:59.270 "is_configured": true, 00:29:59.270 "data_offset": 2048, 00:29:59.270 "data_size": 63488 00:29:59.270 } 00:29:59.270 ] 00:29:59.270 }' 00:29:59.270 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:59.270 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:59.837 [2024-11-20 07:27:23.848216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:59.837 07:27:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:00.096 [2024-11-20 07:27:24.224100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:00.096 /dev/nbd0 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.096 1+0 records in 00:30:00.096 1+0 records out 00:30:00.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047761 s, 8.6 MB/s 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:30:00.096 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:30:00.663 496+0 records in 00:30:00.663 496+0 records out 00:30:00.663 65011712 bytes (65 MB, 62 MiB) copied, 0.42513 s, 153 MB/s 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:30:00.663 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:00.975 [2024-11-20 07:27:24.957790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.975 [2024-11-20 07:27:24.987597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.975 07:27:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.975 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.975 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.975 "name": "raid_bdev1", 00:30:00.975 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:00.975 "strip_size_kb": 64, 00:30:00.975 "state": "online", 00:30:00.975 "raid_level": "raid5f", 00:30:00.975 "superblock": true, 00:30:00.975 "num_base_bdevs": 3, 00:30:00.975 "num_base_bdevs_discovered": 2, 00:30:00.975 "num_base_bdevs_operational": 2, 00:30:00.975 "base_bdevs_list": [ 00:30:00.975 { 00:30:00.975 "name": null, 00:30:00.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.975 "is_configured": 
false, 00:30:00.975 "data_offset": 0, 00:30:00.975 "data_size": 63488 00:30:00.975 }, 00:30:00.975 { 00:30:00.975 "name": "BaseBdev2", 00:30:00.975 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:00.975 "is_configured": true, 00:30:00.975 "data_offset": 2048, 00:30:00.975 "data_size": 63488 00:30:00.975 }, 00:30:00.975 { 00:30:00.975 "name": "BaseBdev3", 00:30:00.975 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:00.975 "is_configured": true, 00:30:00.975 "data_offset": 2048, 00:30:00.975 "data_size": 63488 00:30:00.975 } 00:30:00.975 ] 00:30:00.975 }' 00:30:00.975 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.975 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.248 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:01.248 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.248 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.248 [2024-11-20 07:27:25.487829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:01.248 [2024-11-20 07:27:25.502378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:30:01.248 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.248 07:27:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:01.248 [2024-11-20 07:27:25.509354] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:02.625 07:27:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:02.625 "name": "raid_bdev1", 00:30:02.625 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:02.625 "strip_size_kb": 64, 00:30:02.625 "state": "online", 00:30:02.625 "raid_level": "raid5f", 00:30:02.625 "superblock": true, 00:30:02.625 "num_base_bdevs": 3, 00:30:02.625 "num_base_bdevs_discovered": 3, 00:30:02.625 "num_base_bdevs_operational": 3, 00:30:02.625 "process": { 00:30:02.625 "type": "rebuild", 00:30:02.625 "target": "spare", 00:30:02.625 "progress": { 00:30:02.625 "blocks": 18432, 00:30:02.625 "percent": 14 00:30:02.625 } 00:30:02.625 }, 00:30:02.625 "base_bdevs_list": [ 00:30:02.625 { 00:30:02.625 "name": "spare", 00:30:02.625 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:02.625 "is_configured": true, 00:30:02.625 "data_offset": 2048, 00:30:02.625 "data_size": 63488 00:30:02.625 }, 00:30:02.625 { 00:30:02.625 "name": "BaseBdev2", 00:30:02.625 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:02.625 "is_configured": true, 00:30:02.625 "data_offset": 2048, 00:30:02.625 "data_size": 63488 
00:30:02.625 }, 00:30:02.625 { 00:30:02.625 "name": "BaseBdev3", 00:30:02.625 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:02.625 "is_configured": true, 00:30:02.625 "data_offset": 2048, 00:30:02.625 "data_size": 63488 00:30:02.625 } 00:30:02.625 ] 00:30:02.625 }' 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.625 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.625 [2024-11-20 07:27:26.679065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:02.626 [2024-11-20 07:27:26.721508] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:02.626 [2024-11-20 07:27:26.721587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.626 [2024-11-20 07:27:26.721833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:02.626 [2024-11-20 07:27:26.721853] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.626 "name": "raid_bdev1", 00:30:02.626 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:02.626 "strip_size_kb": 64, 00:30:02.626 "state": "online", 00:30:02.626 "raid_level": "raid5f", 00:30:02.626 "superblock": true, 00:30:02.626 "num_base_bdevs": 3, 00:30:02.626 "num_base_bdevs_discovered": 2, 00:30:02.626 "num_base_bdevs_operational": 2, 00:30:02.626 "base_bdevs_list": [ 00:30:02.626 
{ 00:30:02.626 "name": null, 00:30:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.626 "is_configured": false, 00:30:02.626 "data_offset": 0, 00:30:02.626 "data_size": 63488 00:30:02.626 }, 00:30:02.626 { 00:30:02.626 "name": "BaseBdev2", 00:30:02.626 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:02.626 "is_configured": true, 00:30:02.626 "data_offset": 2048, 00:30:02.626 "data_size": 63488 00:30:02.626 }, 00:30:02.626 { 00:30:02.626 "name": "BaseBdev3", 00:30:02.626 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:02.626 "is_configured": true, 00:30:02.626 "data_offset": 2048, 00:30:02.626 "data_size": 63488 00:30:02.626 } 00:30:02.626 ] 00:30:02.626 }' 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.626 07:27:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:03.193 "name": "raid_bdev1", 00:30:03.193 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:03.193 "strip_size_kb": 64, 00:30:03.193 "state": "online", 00:30:03.193 "raid_level": "raid5f", 00:30:03.193 "superblock": true, 00:30:03.193 "num_base_bdevs": 3, 00:30:03.193 "num_base_bdevs_discovered": 2, 00:30:03.193 "num_base_bdevs_operational": 2, 00:30:03.193 "base_bdevs_list": [ 00:30:03.193 { 00:30:03.193 "name": null, 00:30:03.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.193 "is_configured": false, 00:30:03.193 "data_offset": 0, 00:30:03.193 "data_size": 63488 00:30:03.193 }, 00:30:03.193 { 00:30:03.193 "name": "BaseBdev2", 00:30:03.193 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:03.193 "is_configured": true, 00:30:03.193 "data_offset": 2048, 00:30:03.193 "data_size": 63488 00:30:03.193 }, 00:30:03.193 { 00:30:03.193 "name": "BaseBdev3", 00:30:03.193 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:03.193 "is_configured": true, 00:30:03.193 "data_offset": 2048, 00:30:03.193 "data_size": 63488 00:30:03.193 } 00:30:03.193 ] 00:30:03.193 }' 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:30:03.193 [2024-11-20 07:27:27.432726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:03.193 [2024-11-20 07:27:27.447190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.193 07:27:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:03.193 [2024-11-20 07:27:27.454572] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:04.571 "name": "raid_bdev1", 00:30:04.571 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:04.571 "strip_size_kb": 64, 00:30:04.571 "state": "online", 
00:30:04.571 "raid_level": "raid5f", 00:30:04.571 "superblock": true, 00:30:04.571 "num_base_bdevs": 3, 00:30:04.571 "num_base_bdevs_discovered": 3, 00:30:04.571 "num_base_bdevs_operational": 3, 00:30:04.571 "process": { 00:30:04.571 "type": "rebuild", 00:30:04.571 "target": "spare", 00:30:04.571 "progress": { 00:30:04.571 "blocks": 18432, 00:30:04.571 "percent": 14 00:30:04.571 } 00:30:04.571 }, 00:30:04.571 "base_bdevs_list": [ 00:30:04.571 { 00:30:04.571 "name": "spare", 00:30:04.571 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:04.571 "is_configured": true, 00:30:04.571 "data_offset": 2048, 00:30:04.571 "data_size": 63488 00:30:04.571 }, 00:30:04.571 { 00:30:04.571 "name": "BaseBdev2", 00:30:04.571 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:04.571 "is_configured": true, 00:30:04.571 "data_offset": 2048, 00:30:04.571 "data_size": 63488 00:30:04.571 }, 00:30:04.571 { 00:30:04.571 "name": "BaseBdev3", 00:30:04.571 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:04.571 "is_configured": true, 00:30:04.571 "data_offset": 2048, 00:30:04.571 "data_size": 63488 00:30:04.571 } 00:30:04.571 ] 00:30:04.571 }' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:04.571 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=612 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.571 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:04.571 "name": "raid_bdev1", 00:30:04.571 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:04.571 "strip_size_kb": 64, 00:30:04.571 "state": "online", 00:30:04.571 "raid_level": "raid5f", 00:30:04.571 "superblock": true, 00:30:04.571 "num_base_bdevs": 3, 00:30:04.571 "num_base_bdevs_discovered": 3, 00:30:04.571 "num_base_bdevs_operational": 3, 00:30:04.571 "process": { 00:30:04.571 "type": 
"rebuild", 00:30:04.571 "target": "spare", 00:30:04.571 "progress": { 00:30:04.571 "blocks": 22528, 00:30:04.571 "percent": 17 00:30:04.571 } 00:30:04.571 }, 00:30:04.571 "base_bdevs_list": [ 00:30:04.571 { 00:30:04.571 "name": "spare", 00:30:04.571 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:04.571 "is_configured": true, 00:30:04.571 "data_offset": 2048, 00:30:04.571 "data_size": 63488 00:30:04.571 }, 00:30:04.571 { 00:30:04.571 "name": "BaseBdev2", 00:30:04.571 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:04.571 "is_configured": true, 00:30:04.572 "data_offset": 2048, 00:30:04.572 "data_size": 63488 00:30:04.572 }, 00:30:04.572 { 00:30:04.572 "name": "BaseBdev3", 00:30:04.572 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:04.572 "is_configured": true, 00:30:04.572 "data_offset": 2048, 00:30:04.572 "data_size": 63488 00:30:04.572 } 00:30:04.572 ] 00:30:04.572 }' 00:30:04.572 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:04.572 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.572 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:04.572 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.572 07:27:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:05.506 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.507 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.766 "name": "raid_bdev1", 00:30:05.766 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:05.766 "strip_size_kb": 64, 00:30:05.766 "state": "online", 00:30:05.766 "raid_level": "raid5f", 00:30:05.766 "superblock": true, 00:30:05.766 "num_base_bdevs": 3, 00:30:05.766 "num_base_bdevs_discovered": 3, 00:30:05.766 "num_base_bdevs_operational": 3, 00:30:05.766 "process": { 00:30:05.766 "type": "rebuild", 00:30:05.766 "target": "spare", 00:30:05.766 "progress": { 00:30:05.766 "blocks": 47104, 00:30:05.766 "percent": 37 00:30:05.766 } 00:30:05.766 }, 00:30:05.766 "base_bdevs_list": [ 00:30:05.766 { 00:30:05.766 "name": "spare", 00:30:05.766 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:05.766 "is_configured": true, 00:30:05.766 "data_offset": 2048, 00:30:05.766 "data_size": 63488 00:30:05.766 }, 00:30:05.766 { 00:30:05.766 "name": "BaseBdev2", 00:30:05.766 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:05.766 "is_configured": true, 00:30:05.766 "data_offset": 2048, 00:30:05.766 "data_size": 63488 00:30:05.766 }, 00:30:05.766 { 00:30:05.766 "name": "BaseBdev3", 00:30:05.766 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:05.766 
"is_configured": true, 00:30:05.766 "data_offset": 2048, 00:30:05.766 "data_size": 63488 00:30:05.766 } 00:30:05.766 ] 00:30:05.766 }' 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.766 07:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.702 07:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:06.961 "name": "raid_bdev1", 00:30:06.961 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:06.961 "strip_size_kb": 64, 00:30:06.961 "state": "online", 00:30:06.961 "raid_level": "raid5f", 00:30:06.961 "superblock": true, 00:30:06.961 "num_base_bdevs": 3, 00:30:06.961 "num_base_bdevs_discovered": 3, 00:30:06.961 "num_base_bdevs_operational": 3, 00:30:06.961 "process": { 00:30:06.961 "type": "rebuild", 00:30:06.961 "target": "spare", 00:30:06.961 "progress": { 00:30:06.961 "blocks": 69632, 00:30:06.961 "percent": 54 00:30:06.961 } 00:30:06.961 }, 00:30:06.961 "base_bdevs_list": [ 00:30:06.961 { 00:30:06.961 "name": "spare", 00:30:06.961 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:06.961 "is_configured": true, 00:30:06.961 "data_offset": 2048, 00:30:06.961 "data_size": 63488 00:30:06.961 }, 00:30:06.961 { 00:30:06.961 "name": "BaseBdev2", 00:30:06.961 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:06.961 "is_configured": true, 00:30:06.961 "data_offset": 2048, 00:30:06.961 "data_size": 63488 00:30:06.961 }, 00:30:06.961 { 00:30:06.961 "name": "BaseBdev3", 00:30:06.961 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:06.961 "is_configured": true, 00:30:06.961 "data_offset": 2048, 00:30:06.961 "data_size": 63488 00:30:06.961 } 00:30:06.961 ] 00:30:06.961 }' 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.961 07:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:07.898 "name": "raid_bdev1", 00:30:07.898 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:07.898 "strip_size_kb": 64, 00:30:07.898 "state": "online", 00:30:07.898 "raid_level": "raid5f", 00:30:07.898 "superblock": true, 00:30:07.898 "num_base_bdevs": 3, 00:30:07.898 "num_base_bdevs_discovered": 3, 00:30:07.898 "num_base_bdevs_operational": 3, 00:30:07.898 "process": { 00:30:07.898 "type": "rebuild", 00:30:07.898 "target": "spare", 00:30:07.898 "progress": { 00:30:07.898 "blocks": 94208, 00:30:07.898 "percent": 74 00:30:07.898 } 00:30:07.898 }, 00:30:07.898 "base_bdevs_list": [ 00:30:07.898 { 00:30:07.898 "name": "spare", 00:30:07.898 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:07.898 "is_configured": true, 
00:30:07.898 "data_offset": 2048, 00:30:07.898 "data_size": 63488 00:30:07.898 }, 00:30:07.898 { 00:30:07.898 "name": "BaseBdev2", 00:30:07.898 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:07.898 "is_configured": true, 00:30:07.898 "data_offset": 2048, 00:30:07.898 "data_size": 63488 00:30:07.898 }, 00:30:07.898 { 00:30:07.898 "name": "BaseBdev3", 00:30:07.898 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:07.898 "is_configured": true, 00:30:07.898 "data_offset": 2048, 00:30:07.898 "data_size": 63488 00:30:07.898 } 00:30:07.898 ] 00:30:07.898 }' 00:30:07.898 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:08.207 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.207 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:08.207 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.207 07:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.144 "name": "raid_bdev1", 00:30:09.144 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:09.144 "strip_size_kb": 64, 00:30:09.144 "state": "online", 00:30:09.144 "raid_level": "raid5f", 00:30:09.144 "superblock": true, 00:30:09.144 "num_base_bdevs": 3, 00:30:09.144 "num_base_bdevs_discovered": 3, 00:30:09.144 "num_base_bdevs_operational": 3, 00:30:09.144 "process": { 00:30:09.144 "type": "rebuild", 00:30:09.144 "target": "spare", 00:30:09.144 "progress": { 00:30:09.144 "blocks": 116736, 00:30:09.144 "percent": 91 00:30:09.144 } 00:30:09.144 }, 00:30:09.144 "base_bdevs_list": [ 00:30:09.144 { 00:30:09.144 "name": "spare", 00:30:09.144 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:09.144 "is_configured": true, 00:30:09.144 "data_offset": 2048, 00:30:09.144 "data_size": 63488 00:30:09.144 }, 00:30:09.144 { 00:30:09.144 "name": "BaseBdev2", 00:30:09.144 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:09.144 "is_configured": true, 00:30:09.144 "data_offset": 2048, 00:30:09.144 "data_size": 63488 00:30:09.144 }, 00:30:09.144 { 00:30:09.144 "name": "BaseBdev3", 00:30:09.144 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:09.144 "is_configured": true, 00:30:09.144 "data_offset": 2048, 00:30:09.144 "data_size": 63488 00:30:09.144 } 00:30:09.144 ] 00:30:09.144 }' 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:30:09.144 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:09.403 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.403 07:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:09.662 [2024-11-20 07:27:33.718054] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:09.662 [2024-11-20 07:27:33.718161] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:09.662 [2024-11-20 07:27:33.718328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:10.274 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.275 07:27:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:10.275 "name": "raid_bdev1", 00:30:10.275 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:10.275 "strip_size_kb": 64, 00:30:10.275 "state": "online", 00:30:10.275 "raid_level": "raid5f", 00:30:10.275 "superblock": true, 00:30:10.275 "num_base_bdevs": 3, 00:30:10.275 "num_base_bdevs_discovered": 3, 00:30:10.275 "num_base_bdevs_operational": 3, 00:30:10.275 "base_bdevs_list": [ 00:30:10.275 { 00:30:10.275 "name": "spare", 00:30:10.275 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:10.275 "is_configured": true, 00:30:10.275 "data_offset": 2048, 00:30:10.275 "data_size": 63488 00:30:10.275 }, 00:30:10.275 { 00:30:10.275 "name": "BaseBdev2", 00:30:10.275 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:10.275 "is_configured": true, 00:30:10.275 "data_offset": 2048, 00:30:10.275 "data_size": 63488 00:30:10.275 }, 00:30:10.275 { 00:30:10.275 "name": "BaseBdev3", 00:30:10.275 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:10.275 "is_configured": true, 00:30:10.275 "data_offset": 2048, 00:30:10.275 "data_size": 63488 00:30:10.275 } 00:30:10.275 ] 00:30:10.275 }' 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:10.275 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:10.535 
07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:10.535 "name": "raid_bdev1", 00:30:10.535 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:10.535 "strip_size_kb": 64, 00:30:10.535 "state": "online", 00:30:10.535 "raid_level": "raid5f", 00:30:10.535 "superblock": true, 00:30:10.535 "num_base_bdevs": 3, 00:30:10.535 "num_base_bdevs_discovered": 3, 00:30:10.535 "num_base_bdevs_operational": 3, 00:30:10.535 "base_bdevs_list": [ 00:30:10.535 { 00:30:10.535 "name": "spare", 00:30:10.535 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 00:30:10.535 "data_size": 63488 00:30:10.535 }, 00:30:10.535 { 00:30:10.535 "name": "BaseBdev2", 00:30:10.535 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 00:30:10.535 "data_size": 63488 00:30:10.535 }, 00:30:10.535 { 00:30:10.535 "name": "BaseBdev3", 00:30:10.535 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 
00:30:10.535 "data_size": 63488 00:30:10.535 } 00:30:10.535 ] 00:30:10.535 }' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:10.535 "name": "raid_bdev1", 00:30:10.535 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:10.535 "strip_size_kb": 64, 00:30:10.535 "state": "online", 00:30:10.535 "raid_level": "raid5f", 00:30:10.535 "superblock": true, 00:30:10.535 "num_base_bdevs": 3, 00:30:10.535 "num_base_bdevs_discovered": 3, 00:30:10.535 "num_base_bdevs_operational": 3, 00:30:10.535 "base_bdevs_list": [ 00:30:10.535 { 00:30:10.535 "name": "spare", 00:30:10.535 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 00:30:10.535 "data_size": 63488 00:30:10.535 }, 00:30:10.535 { 00:30:10.535 "name": "BaseBdev2", 00:30:10.535 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 00:30:10.535 "data_size": 63488 00:30:10.535 }, 00:30:10.535 { 00:30:10.535 "name": "BaseBdev3", 00:30:10.535 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:10.535 "is_configured": true, 00:30:10.535 "data_offset": 2048, 00:30:10.535 "data_size": 63488 00:30:10.535 } 00:30:10.535 ] 00:30:10.535 }' 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:10.535 07:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 [2024-11-20 07:27:35.274301] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:11.102 [2024-11-20 07:27:35.274367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:11.102 [2024-11-20 07:27:35.274466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:11.102 [2024-11-20 07:27:35.274577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:11.102 [2024-11-20 07:27:35.274648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:11.102 07:27:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.102 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:11.361 /dev/nbd0 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.620 1+0 records in 00:30:11.620 1+0 records out 00:30:11.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260253 s, 15.7 MB/s 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.620 07:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:11.879 /dev/nbd1 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:11.879 
07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.879 1+0 records in 00:30:11.879 1+0 records out 00:30:11.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394116 s, 10.4 MB/s 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.879 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:12.138 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:12.397 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.656 [2024-11-20 07:27:36.862446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:12.656 [2024-11-20 07:27:36.862535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.656 [2024-11-20 07:27:36.862562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:12.656 [2024-11-20 07:27:36.862579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.656 [2024-11-20 07:27:36.865457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.656 [2024-11-20 07:27:36.865535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:12.656 [2024-11-20 07:27:36.865663] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:12.656 [2024-11-20 07:27:36.865769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:12.656 [2024-11-20 07:27:36.865957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:12.656 [2024-11-20 07:27:36.866096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:12.656 spare 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.656 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.917 [2024-11-20 07:27:36.966217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:12.917 [2024-11-20 07:27:36.966265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:12.917 [2024-11-20 07:27:36.966595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:30:12.917 [2024-11-20 07:27:36.970824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:12.917 [2024-11-20 07:27:36.970851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:12.917 [2024-11-20 07:27:36.971114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.917 07:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.917 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.917 "name": "raid_bdev1", 00:30:12.917 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:12.917 "strip_size_kb": 64, 00:30:12.917 "state": "online", 00:30:12.917 "raid_level": "raid5f", 00:30:12.917 "superblock": true, 00:30:12.917 "num_base_bdevs": 3, 00:30:12.917 "num_base_bdevs_discovered": 3, 00:30:12.917 "num_base_bdevs_operational": 3, 00:30:12.917 "base_bdevs_list": [ 00:30:12.917 { 
00:30:12.917 "name": "spare", 00:30:12.917 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:12.917 "is_configured": true, 00:30:12.917 "data_offset": 2048, 00:30:12.917 "data_size": 63488 00:30:12.917 }, 00:30:12.917 { 00:30:12.917 "name": "BaseBdev2", 00:30:12.917 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:12.917 "is_configured": true, 00:30:12.918 "data_offset": 2048, 00:30:12.918 "data_size": 63488 00:30:12.918 }, 00:30:12.918 { 00:30:12.918 "name": "BaseBdev3", 00:30:12.918 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:12.918 "is_configured": true, 00:30:12.918 "data_offset": 2048, 00:30:12.918 "data_size": 63488 00:30:12.918 } 00:30:12.918 ] 00:30:12.918 }' 00:30:12.918 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.918 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:13.489 "name": "raid_bdev1", 00:30:13.489 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:13.489 "strip_size_kb": 64, 00:30:13.489 "state": "online", 00:30:13.489 "raid_level": "raid5f", 00:30:13.489 "superblock": true, 00:30:13.489 "num_base_bdevs": 3, 00:30:13.489 "num_base_bdevs_discovered": 3, 00:30:13.489 "num_base_bdevs_operational": 3, 00:30:13.489 "base_bdevs_list": [ 00:30:13.489 { 00:30:13.489 "name": "spare", 00:30:13.489 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:13.489 "is_configured": true, 00:30:13.489 "data_offset": 2048, 00:30:13.489 "data_size": 63488 00:30:13.489 }, 00:30:13.489 { 00:30:13.489 "name": "BaseBdev2", 00:30:13.489 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:13.489 "is_configured": true, 00:30:13.489 "data_offset": 2048, 00:30:13.489 "data_size": 63488 00:30:13.489 }, 00:30:13.489 { 00:30:13.489 "name": "BaseBdev3", 00:30:13.489 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:13.489 "is_configured": true, 00:30:13.489 "data_offset": 2048, 00:30:13.489 "data_size": 63488 00:30:13.489 } 00:30:13.489 ] 00:30:13.489 }' 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.489 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.490 [2024-11-20 07:27:37.708262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.490 07:27:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.490 "name": "raid_bdev1", 00:30:13.490 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:13.490 "strip_size_kb": 64, 00:30:13.490 "state": "online", 00:30:13.490 "raid_level": "raid5f", 00:30:13.490 "superblock": true, 00:30:13.490 "num_base_bdevs": 3, 00:30:13.490 "num_base_bdevs_discovered": 2, 00:30:13.490 "num_base_bdevs_operational": 2, 00:30:13.490 "base_bdevs_list": [ 00:30:13.490 { 00:30:13.490 "name": null, 00:30:13.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.490 "is_configured": false, 00:30:13.490 "data_offset": 0, 00:30:13.490 "data_size": 63488 00:30:13.490 }, 00:30:13.490 { 00:30:13.490 "name": "BaseBdev2", 00:30:13.490 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:13.490 "is_configured": true, 00:30:13.490 "data_offset": 2048, 00:30:13.490 "data_size": 63488 00:30:13.490 }, 00:30:13.490 { 00:30:13.490 "name": "BaseBdev3", 00:30:13.490 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:13.490 "is_configured": true, 00:30:13.490 "data_offset": 2048, 00:30:13.490 "data_size": 63488 00:30:13.490 } 00:30:13.490 ] 00:30:13.490 }' 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.490 07:27:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:14.057 07:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:14.057 07:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.057 07:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.057 [2024-11-20 07:27:38.196491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.057 [2024-11-20 07:27:38.196761] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:14.057 [2024-11-20 07:27:38.196786] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:14.057 [2024-11-20 07:27:38.196896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.057 [2024-11-20 07:27:38.211680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:30:14.057 07:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.057 07:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:14.057 [2024-11-20 07:27:38.218355] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:14.997 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:14.998 
07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.998 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:14.998 "name": "raid_bdev1", 00:30:14.998 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:14.998 "strip_size_kb": 64, 00:30:14.998 "state": "online", 00:30:14.998 "raid_level": "raid5f", 00:30:14.998 "superblock": true, 00:30:14.998 "num_base_bdevs": 3, 00:30:14.998 "num_base_bdevs_discovered": 3, 00:30:14.998 "num_base_bdevs_operational": 3, 00:30:14.998 "process": { 00:30:14.998 "type": "rebuild", 00:30:14.998 "target": "spare", 00:30:14.998 "progress": { 00:30:14.998 "blocks": 18432, 00:30:14.998 "percent": 14 00:30:14.998 } 00:30:14.998 }, 00:30:14.998 "base_bdevs_list": [ 00:30:14.998 { 00:30:14.998 "name": "spare", 00:30:14.998 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:14.998 "is_configured": true, 00:30:14.998 "data_offset": 2048, 00:30:14.998 "data_size": 63488 00:30:14.998 }, 00:30:14.998 { 00:30:14.998 "name": "BaseBdev2", 00:30:14.998 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:14.998 "is_configured": true, 00:30:14.998 "data_offset": 2048, 00:30:14.998 "data_size": 63488 00:30:14.998 }, 00:30:14.998 { 00:30:14.998 "name": "BaseBdev3", 00:30:14.998 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:14.998 "is_configured": true, 00:30:14.998 "data_offset": 2048, 00:30:14.998 "data_size": 63488 00:30:14.998 } 00:30:14.998 ] 00:30:14.998 }' 00:30:14.998 07:27:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.276 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.276 [2024-11-20 07:27:39.395840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:15.276 [2024-11-20 07:27:39.431187] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:15.276 [2024-11-20 07:27:39.431291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.276 [2024-11-20 07:27:39.431316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:15.276 [2024-11-20 07:27:39.431330] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:15.277 
07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.277 "name": "raid_bdev1", 00:30:15.277 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:15.277 "strip_size_kb": 64, 00:30:15.277 "state": "online", 00:30:15.277 "raid_level": "raid5f", 00:30:15.277 "superblock": true, 00:30:15.277 "num_base_bdevs": 3, 00:30:15.277 "num_base_bdevs_discovered": 2, 00:30:15.277 "num_base_bdevs_operational": 2, 00:30:15.277 "base_bdevs_list": [ 00:30:15.277 { 00:30:15.277 "name": null, 00:30:15.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.277 "is_configured": false, 00:30:15.277 "data_offset": 0, 00:30:15.277 "data_size": 63488 00:30:15.277 }, 00:30:15.277 { 00:30:15.277 "name": "BaseBdev2", 00:30:15.277 "uuid": 
"a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:15.277 "is_configured": true, 00:30:15.277 "data_offset": 2048, 00:30:15.277 "data_size": 63488 00:30:15.277 }, 00:30:15.277 { 00:30:15.277 "name": "BaseBdev3", 00:30:15.277 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:15.277 "is_configured": true, 00:30:15.277 "data_offset": 2048, 00:30:15.277 "data_size": 63488 00:30:15.277 } 00:30:15.277 ] 00:30:15.277 }' 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.277 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.860 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:15.860 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.860 07:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.860 [2024-11-20 07:27:39.990421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:15.860 [2024-11-20 07:27:39.990543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.860 [2024-11-20 07:27:39.990574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:15.860 [2024-11-20 07:27:39.990609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.860 [2024-11-20 07:27:39.991296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.860 [2024-11-20 07:27:39.991360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:15.861 [2024-11-20 07:27:39.991491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:15.861 [2024-11-20 07:27:39.991532] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:30:15.861 [2024-11-20 07:27:39.991545] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:15.861 [2024-11-20 07:27:39.991609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:15.861 [2024-11-20 07:27:40.004937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:30:15.861 spare 00:30:15.861 07:27:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.861 07:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:15.861 [2024-11-20 07:27:40.011748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:16.796 "name": 
"raid_bdev1", 00:30:16.796 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:16.796 "strip_size_kb": 64, 00:30:16.796 "state": "online", 00:30:16.796 "raid_level": "raid5f", 00:30:16.796 "superblock": true, 00:30:16.796 "num_base_bdevs": 3, 00:30:16.796 "num_base_bdevs_discovered": 3, 00:30:16.796 "num_base_bdevs_operational": 3, 00:30:16.796 "process": { 00:30:16.796 "type": "rebuild", 00:30:16.796 "target": "spare", 00:30:16.796 "progress": { 00:30:16.796 "blocks": 18432, 00:30:16.796 "percent": 14 00:30:16.796 } 00:30:16.796 }, 00:30:16.796 "base_bdevs_list": [ 00:30:16.796 { 00:30:16.796 "name": "spare", 00:30:16.796 "uuid": "cc14c959-0715-55cd-8b4a-f62369be6a9d", 00:30:16.796 "is_configured": true, 00:30:16.796 "data_offset": 2048, 00:30:16.796 "data_size": 63488 00:30:16.796 }, 00:30:16.796 { 00:30:16.796 "name": "BaseBdev2", 00:30:16.796 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:16.796 "is_configured": true, 00:30:16.796 "data_offset": 2048, 00:30:16.796 "data_size": 63488 00:30:16.796 }, 00:30:16.796 { 00:30:16.796 "name": "BaseBdev3", 00:30:16.796 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:16.796 "is_configured": true, 00:30:16.796 "data_offset": 2048, 00:30:16.796 "data_size": 63488 00:30:16.796 } 00:30:16.796 ] 00:30:16.796 }' 00:30:16.796 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:17.054 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.055 07:27:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.055 [2024-11-20 07:27:41.181268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.055 [2024-11-20 07:27:41.223886] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:17.055 [2024-11-20 07:27:41.223978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.055 [2024-11-20 07:27:41.224003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.055 [2024-11-20 07:27:41.224014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.055 "name": "raid_bdev1", 00:30:17.055 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:17.055 "strip_size_kb": 64, 00:30:17.055 "state": "online", 00:30:17.055 "raid_level": "raid5f", 00:30:17.055 "superblock": true, 00:30:17.055 "num_base_bdevs": 3, 00:30:17.055 "num_base_bdevs_discovered": 2, 00:30:17.055 "num_base_bdevs_operational": 2, 00:30:17.055 "base_bdevs_list": [ 00:30:17.055 { 00:30:17.055 "name": null, 00:30:17.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.055 "is_configured": false, 00:30:17.055 "data_offset": 0, 00:30:17.055 "data_size": 63488 00:30:17.055 }, 00:30:17.055 { 00:30:17.055 "name": "BaseBdev2", 00:30:17.055 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:17.055 "is_configured": true, 00:30:17.055 "data_offset": 2048, 00:30:17.055 "data_size": 63488 00:30:17.055 }, 00:30:17.055 { 00:30:17.055 "name": "BaseBdev3", 00:30:17.055 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:17.055 "is_configured": true, 00:30:17.055 "data_offset": 2048, 00:30:17.055 "data_size": 63488 00:30:17.055 } 00:30:17.055 ] 00:30:17.055 }' 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.055 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:17.622 "name": "raid_bdev1", 00:30:17.622 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:17.622 "strip_size_kb": 64, 00:30:17.622 "state": "online", 00:30:17.622 "raid_level": "raid5f", 00:30:17.622 "superblock": true, 00:30:17.622 "num_base_bdevs": 3, 00:30:17.622 "num_base_bdevs_discovered": 2, 00:30:17.622 "num_base_bdevs_operational": 2, 00:30:17.622 "base_bdevs_list": [ 00:30:17.622 { 00:30:17.622 "name": null, 00:30:17.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.622 "is_configured": false, 00:30:17.622 "data_offset": 0, 00:30:17.622 "data_size": 63488 00:30:17.622 }, 00:30:17.622 { 00:30:17.622 "name": "BaseBdev2", 00:30:17.622 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:17.622 "is_configured": true, 00:30:17.622 "data_offset": 2048, 00:30:17.622 "data_size": 63488 00:30:17.622 }, 00:30:17.622 { 
00:30:17.622 "name": "BaseBdev3", 00:30:17.622 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:17.622 "is_configured": true, 00:30:17.622 "data_offset": 2048, 00:30:17.622 "data_size": 63488 00:30:17.622 } 00:30:17.622 ] 00:30:17.622 }' 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:17.622 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.885 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.885 [2024-11-20 07:27:41.940343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:17.885 [2024-11-20 07:27:41.940435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.885 [2024-11-20 07:27:41.940469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:17.885 [2024-11-20 07:27:41.940484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.885 
[2024-11-20 07:27:41.941159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.885 [2024-11-20 07:27:41.941231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:17.885 [2024-11-20 07:27:41.941357] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:17.886 [2024-11-20 07:27:41.941387] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:17.886 [2024-11-20 07:27:41.941427] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:17.886 [2024-11-20 07:27:41.941440] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:17.886 BaseBdev1 00:30:17.886 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.886 07:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:18.823 07:27:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.823 07:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.823 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:18.823 "name": "raid_bdev1", 00:30:18.823 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:18.823 "strip_size_kb": 64, 00:30:18.823 "state": "online", 00:30:18.823 "raid_level": "raid5f", 00:30:18.823 "superblock": true, 00:30:18.823 "num_base_bdevs": 3, 00:30:18.823 "num_base_bdevs_discovered": 2, 00:30:18.823 "num_base_bdevs_operational": 2, 00:30:18.823 "base_bdevs_list": [ 00:30:18.823 { 00:30:18.823 "name": null, 00:30:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.823 "is_configured": false, 00:30:18.823 "data_offset": 0, 00:30:18.823 "data_size": 63488 00:30:18.823 }, 00:30:18.823 { 00:30:18.823 "name": "BaseBdev2", 00:30:18.823 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:18.823 "is_configured": true, 00:30:18.823 "data_offset": 2048, 00:30:18.823 "data_size": 63488 00:30:18.823 }, 00:30:18.823 { 00:30:18.823 "name": "BaseBdev3", 00:30:18.823 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:18.823 "is_configured": true, 00:30:18.823 "data_offset": 2048, 00:30:18.824 "data_size": 63488 00:30:18.824 } 00:30:18.824 ] 00:30:18.824 }' 00:30:18.824 07:27:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:18.824 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.390 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:19.390 "name": "raid_bdev1", 00:30:19.390 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:19.390 "strip_size_kb": 64, 00:30:19.390 "state": "online", 00:30:19.390 "raid_level": "raid5f", 00:30:19.390 "superblock": true, 00:30:19.390 "num_base_bdevs": 3, 00:30:19.390 "num_base_bdevs_discovered": 2, 00:30:19.390 "num_base_bdevs_operational": 2, 00:30:19.390 "base_bdevs_list": [ 00:30:19.390 { 00:30:19.390 "name": null, 00:30:19.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.390 "is_configured": false, 00:30:19.390 "data_offset": 0, 00:30:19.390 "data_size": 63488 
00:30:19.390 }, 00:30:19.390 { 00:30:19.390 "name": "BaseBdev2", 00:30:19.390 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:19.390 "is_configured": true, 00:30:19.390 "data_offset": 2048, 00:30:19.390 "data_size": 63488 00:30:19.390 }, 00:30:19.390 { 00:30:19.390 "name": "BaseBdev3", 00:30:19.390 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:19.390 "is_configured": true, 00:30:19.390 "data_offset": 2048, 00:30:19.391 "data_size": 63488 00:30:19.391 } 00:30:19.391 ] 00:30:19.391 }' 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:19.391 07:27:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.391 [2024-11-20 07:27:43.648847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:19.391 [2024-11-20 07:27:43.649117] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:19.391 [2024-11-20 07:27:43.649163] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:19.391 request: 00:30:19.391 { 00:30:19.391 "base_bdev": "BaseBdev1", 00:30:19.391 "raid_bdev": "raid_bdev1", 00:30:19.391 "method": "bdev_raid_add_base_bdev", 00:30:19.391 "req_id": 1 00:30:19.391 } 00:30:19.391 Got JSON-RPC error response 00:30:19.391 response: 00:30:19.391 { 00:30:19.391 "code": -22, 00:30:19.391 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:19.391 } 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:19.391 07:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:20.768 "name": "raid_bdev1", 00:30:20.768 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:20.768 "strip_size_kb": 64, 00:30:20.768 "state": "online", 00:30:20.768 "raid_level": "raid5f", 00:30:20.768 "superblock": true, 00:30:20.768 "num_base_bdevs": 3, 00:30:20.768 "num_base_bdevs_discovered": 2, 00:30:20.768 "num_base_bdevs_operational": 2, 00:30:20.768 "base_bdevs_list": [ 00:30:20.768 { 00:30:20.768 "name": null, 00:30:20.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.768 "is_configured": false, 00:30:20.768 
"data_offset": 0, 00:30:20.768 "data_size": 63488 00:30:20.768 }, 00:30:20.768 { 00:30:20.768 "name": "BaseBdev2", 00:30:20.768 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:20.768 "is_configured": true, 00:30:20.768 "data_offset": 2048, 00:30:20.768 "data_size": 63488 00:30:20.768 }, 00:30:20.768 { 00:30:20.768 "name": "BaseBdev3", 00:30:20.768 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:20.768 "is_configured": true, 00:30:20.768 "data_offset": 2048, 00:30:20.768 "data_size": 63488 00:30:20.768 } 00:30:20.768 ] 00:30:20.768 }' 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:20.768 07:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:21.028 "name": 
"raid_bdev1", 00:30:21.028 "uuid": "a5df8048-73dc-46c9-94d8-eb9d57da86f3", 00:30:21.028 "strip_size_kb": 64, 00:30:21.028 "state": "online", 00:30:21.028 "raid_level": "raid5f", 00:30:21.028 "superblock": true, 00:30:21.028 "num_base_bdevs": 3, 00:30:21.028 "num_base_bdevs_discovered": 2, 00:30:21.028 "num_base_bdevs_operational": 2, 00:30:21.028 "base_bdevs_list": [ 00:30:21.028 { 00:30:21.028 "name": null, 00:30:21.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.028 "is_configured": false, 00:30:21.028 "data_offset": 0, 00:30:21.028 "data_size": 63488 00:30:21.028 }, 00:30:21.028 { 00:30:21.028 "name": "BaseBdev2", 00:30:21.028 "uuid": "a69d2953-8985-5ea5-beda-f6da841019a5", 00:30:21.028 "is_configured": true, 00:30:21.028 "data_offset": 2048, 00:30:21.028 "data_size": 63488 00:30:21.028 }, 00:30:21.028 { 00:30:21.028 "name": "BaseBdev3", 00:30:21.028 "uuid": "632cf3ea-1df3-5b50-82a1-6290c4bbde95", 00:30:21.028 "is_configured": true, 00:30:21.028 "data_offset": 2048, 00:30:21.028 "data_size": 63488 00:30:21.028 } 00:30:21.028 ] 00:30:21.028 }' 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:21.028 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82580 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82580 ']' 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82580 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:21.287 07:27:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82580 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.287 killing process with pid 82580 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82580' 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82580 00:30:21.287 Received shutdown signal, test time was about 60.000000 seconds 00:30:21.287 00:30:21.287 Latency(us) 00:30:21.287 [2024-11-20T07:27:45.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.287 [2024-11-20T07:27:45.576Z] =================================================================================================================== 00:30:21.287 [2024-11-20T07:27:45.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:21.287 [2024-11-20 07:27:45.384915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:21.287 07:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82580 00:30:21.287 [2024-11-20 07:27:45.385085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.287 [2024-11-20 07:27:45.385164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:21.287 [2024-11-20 07:27:45.385200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:21.546 [2024-11-20 07:27:45.693483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:22.515 07:27:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:30:22.515 00:30:22.515 real 0m24.619s 00:30:22.515 user 0m32.939s 00:30:22.515 sys 0m2.556s 00:30:22.515 07:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.515 07:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.515 ************************************ 00:30:22.515 END TEST raid5f_rebuild_test_sb 00:30:22.515 ************************************ 00:30:22.515 07:27:46 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:30:22.515 07:27:46 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:30:22.515 07:27:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:22.515 07:27:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.515 07:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:22.515 ************************************ 00:30:22.515 START TEST raid5f_state_function_test 00:30:22.515 ************************************ 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:22.515 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83345 00:30:22.516 Process raid pid: 83345 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83345' 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83345 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83345 ']' 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.516 07:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.516 [2024-11-20 07:27:46.770668] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:22.516 [2024-11-20 07:27:46.770860] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.775 [2024-11-20 07:27:46.954898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.033 [2024-11-20 07:27:47.069183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.033 [2024-11-20 07:27:47.259234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:23.033 [2024-11-20 07:27:47.259327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.601 [2024-11-20 07:27:47.741466] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:23.601 [2024-11-20 07:27:47.741555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:23.601 [2024-11-20 
07:27:47.741571] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:23.601 [2024-11-20 07:27:47.741587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:23.601 [2024-11-20 07:27:47.741623] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:23.601 [2024-11-20 07:27:47.741639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:23.601 [2024-11-20 07:27:47.741648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:23.601 [2024-11-20 07:27:47.741694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:23.601 07:27:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:23.601 "name": "Existed_Raid", 00:30:23.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.601 "strip_size_kb": 64, 00:30:23.601 "state": "configuring", 00:30:23.601 "raid_level": "raid5f", 00:30:23.601 "superblock": false, 00:30:23.601 "num_base_bdevs": 4, 00:30:23.601 "num_base_bdevs_discovered": 0, 00:30:23.601 "num_base_bdevs_operational": 4, 00:30:23.601 "base_bdevs_list": [ 00:30:23.601 { 00:30:23.601 "name": "BaseBdev1", 00:30:23.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.601 "is_configured": false, 00:30:23.601 "data_offset": 0, 00:30:23.601 "data_size": 0 00:30:23.601 }, 00:30:23.601 { 00:30:23.601 "name": "BaseBdev2", 00:30:23.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.601 "is_configured": false, 00:30:23.601 "data_offset": 0, 00:30:23.601 "data_size": 0 00:30:23.601 }, 00:30:23.601 { 00:30:23.601 "name": "BaseBdev3", 00:30:23.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.601 "is_configured": false, 00:30:23.601 "data_offset": 0, 00:30:23.601 "data_size": 0 00:30:23.601 }, 00:30:23.601 { 00:30:23.601 "name": "BaseBdev4", 00:30:23.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.601 "is_configured": false, 00:30:23.601 
"data_offset": 0, 00:30:23.601 "data_size": 0 00:30:23.601 } 00:30:23.601 ] 00:30:23.601 }' 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:23.601 07:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.169 [2024-11-20 07:27:48.245562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:24.169 [2024-11-20 07:27:48.245649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.169 [2024-11-20 07:27:48.253551] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:24.169 [2024-11-20 07:27:48.253658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:24.169 [2024-11-20 07:27:48.253674] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:24.169 [2024-11-20 07:27:48.253690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:24.169 [2024-11-20 
07:27:48.253700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:24.169 [2024-11-20 07:27:48.253714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:24.169 [2024-11-20 07:27:48.253723] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:24.169 [2024-11-20 07:27:48.253737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.169 [2024-11-20 07:27:48.296241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.169 BaseBdev1 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:24.169 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.170 [ 00:30:24.170 { 00:30:24.170 "name": "BaseBdev1", 00:30:24.170 "aliases": [ 00:30:24.170 "f0b8fc34-de18-482b-94be-c69d7c4077ff" 00:30:24.170 ], 00:30:24.170 "product_name": "Malloc disk", 00:30:24.170 "block_size": 512, 00:30:24.170 "num_blocks": 65536, 00:30:24.170 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:24.170 "assigned_rate_limits": { 00:30:24.170 "rw_ios_per_sec": 0, 00:30:24.170 "rw_mbytes_per_sec": 0, 00:30:24.170 "r_mbytes_per_sec": 0, 00:30:24.170 "w_mbytes_per_sec": 0 00:30:24.170 }, 00:30:24.170 "claimed": true, 00:30:24.170 "claim_type": "exclusive_write", 00:30:24.170 "zoned": false, 00:30:24.170 "supported_io_types": { 00:30:24.170 "read": true, 00:30:24.170 "write": true, 00:30:24.170 "unmap": true, 00:30:24.170 "flush": true, 00:30:24.170 "reset": true, 00:30:24.170 "nvme_admin": false, 00:30:24.170 "nvme_io": false, 00:30:24.170 "nvme_io_md": false, 00:30:24.170 "write_zeroes": true, 00:30:24.170 "zcopy": true, 00:30:24.170 "get_zone_info": false, 00:30:24.170 "zone_management": false, 00:30:24.170 "zone_append": false, 00:30:24.170 "compare": false, 00:30:24.170 "compare_and_write": false, 00:30:24.170 "abort": true, 00:30:24.170 "seek_hole": false, 00:30:24.170 "seek_data": false, 00:30:24.170 "copy": true, 00:30:24.170 
"nvme_iov_md": false 00:30:24.170 }, 00:30:24.170 "memory_domains": [ 00:30:24.170 { 00:30:24.170 "dma_device_id": "system", 00:30:24.170 "dma_device_type": 1 00:30:24.170 }, 00:30:24.170 { 00:30:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:24.170 "dma_device_type": 2 00:30:24.170 } 00:30:24.170 ], 00:30:24.170 "driver_specific": {} 00:30:24.170 } 00:30:24.170 ] 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.170 "name": "Existed_Raid", 00:30:24.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.170 "strip_size_kb": 64, 00:30:24.170 "state": "configuring", 00:30:24.170 "raid_level": "raid5f", 00:30:24.170 "superblock": false, 00:30:24.170 "num_base_bdevs": 4, 00:30:24.170 "num_base_bdevs_discovered": 1, 00:30:24.170 "num_base_bdevs_operational": 4, 00:30:24.170 "base_bdevs_list": [ 00:30:24.170 { 00:30:24.170 "name": "BaseBdev1", 00:30:24.170 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:24.170 "is_configured": true, 00:30:24.170 "data_offset": 0, 00:30:24.170 "data_size": 65536 00:30:24.170 }, 00:30:24.170 { 00:30:24.170 "name": "BaseBdev2", 00:30:24.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.170 "is_configured": false, 00:30:24.170 "data_offset": 0, 00:30:24.170 "data_size": 0 00:30:24.170 }, 00:30:24.170 { 00:30:24.170 "name": "BaseBdev3", 00:30:24.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.170 "is_configured": false, 00:30:24.170 "data_offset": 0, 00:30:24.170 "data_size": 0 00:30:24.170 }, 00:30:24.170 { 00:30:24.170 "name": "BaseBdev4", 00:30:24.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.170 "is_configured": false, 00:30:24.170 "data_offset": 0, 00:30:24.170 "data_size": 0 00:30:24.170 } 00:30:24.170 ] 00:30:24.170 }' 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.170 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.738 [2024-11-20 07:27:48.864530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:24.738 [2024-11-20 07:27:48.864636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.738 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.738 [2024-11-20 07:27:48.872547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.739 [2024-11-20 07:27:48.875218] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:24.739 [2024-11-20 07:27:48.875275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:24.739 [2024-11-20 07:27:48.875302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:24.739 [2024-11-20 07:27:48.875330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:24.739 [2024-11-20 07:27:48.875340] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:24.739 [2024-11-20 07:27:48.875353] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.739 "name": "Existed_Raid", 00:30:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.739 "strip_size_kb": 64, 00:30:24.739 "state": "configuring", 00:30:24.739 "raid_level": "raid5f", 00:30:24.739 "superblock": false, 00:30:24.739 "num_base_bdevs": 4, 00:30:24.739 "num_base_bdevs_discovered": 1, 00:30:24.739 "num_base_bdevs_operational": 4, 00:30:24.739 "base_bdevs_list": [ 00:30:24.739 { 00:30:24.739 "name": "BaseBdev1", 00:30:24.739 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:24.739 "is_configured": true, 00:30:24.739 "data_offset": 0, 00:30:24.739 "data_size": 65536 00:30:24.739 }, 00:30:24.739 { 00:30:24.739 "name": "BaseBdev2", 00:30:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.739 "is_configured": false, 00:30:24.739 "data_offset": 0, 00:30:24.739 "data_size": 0 00:30:24.739 }, 00:30:24.739 { 00:30:24.739 "name": "BaseBdev3", 00:30:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.739 "is_configured": false, 00:30:24.739 "data_offset": 0, 00:30:24.739 "data_size": 0 00:30:24.739 }, 00:30:24.739 { 00:30:24.739 "name": "BaseBdev4", 00:30:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.739 "is_configured": false, 00:30:24.739 "data_offset": 0, 00:30:24.739 "data_size": 0 00:30:24.739 } 00:30:24.739 ] 00:30:24.739 }' 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.739 07:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.309 [2024-11-20 07:27:49.444385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:25.309 BaseBdev2 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.309 [ 00:30:25.309 { 00:30:25.309 "name": "BaseBdev2", 00:30:25.309 "aliases": [ 
00:30:25.309 "e26e30b9-40d5-43a2-b8d2-5b91671d2b97" 00:30:25.309 ], 00:30:25.309 "product_name": "Malloc disk", 00:30:25.309 "block_size": 512, 00:30:25.309 "num_blocks": 65536, 00:30:25.309 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:25.309 "assigned_rate_limits": { 00:30:25.309 "rw_ios_per_sec": 0, 00:30:25.309 "rw_mbytes_per_sec": 0, 00:30:25.309 "r_mbytes_per_sec": 0, 00:30:25.309 "w_mbytes_per_sec": 0 00:30:25.309 }, 00:30:25.309 "claimed": true, 00:30:25.309 "claim_type": "exclusive_write", 00:30:25.309 "zoned": false, 00:30:25.309 "supported_io_types": { 00:30:25.309 "read": true, 00:30:25.309 "write": true, 00:30:25.309 "unmap": true, 00:30:25.309 "flush": true, 00:30:25.309 "reset": true, 00:30:25.309 "nvme_admin": false, 00:30:25.309 "nvme_io": false, 00:30:25.309 "nvme_io_md": false, 00:30:25.309 "write_zeroes": true, 00:30:25.309 "zcopy": true, 00:30:25.309 "get_zone_info": false, 00:30:25.309 "zone_management": false, 00:30:25.309 "zone_append": false, 00:30:25.309 "compare": false, 00:30:25.309 "compare_and_write": false, 00:30:25.309 "abort": true, 00:30:25.309 "seek_hole": false, 00:30:25.309 "seek_data": false, 00:30:25.309 "copy": true, 00:30:25.309 "nvme_iov_md": false 00:30:25.309 }, 00:30:25.309 "memory_domains": [ 00:30:25.309 { 00:30:25.309 "dma_device_id": "system", 00:30:25.309 "dma_device_type": 1 00:30:25.309 }, 00:30:25.309 { 00:30:25.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:25.309 "dma_device_type": 2 00:30:25.309 } 00:30:25.309 ], 00:30:25.309 "driver_specific": {} 00:30:25.309 } 00:30:25.309 ] 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.309 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.310 "name": "Existed_Raid", 00:30:25.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.310 "strip_size_kb": 64, 
00:30:25.310 "state": "configuring", 00:30:25.310 "raid_level": "raid5f", 00:30:25.310 "superblock": false, 00:30:25.310 "num_base_bdevs": 4, 00:30:25.310 "num_base_bdevs_discovered": 2, 00:30:25.310 "num_base_bdevs_operational": 4, 00:30:25.310 "base_bdevs_list": [ 00:30:25.310 { 00:30:25.310 "name": "BaseBdev1", 00:30:25.310 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:25.310 "is_configured": true, 00:30:25.310 "data_offset": 0, 00:30:25.310 "data_size": 65536 00:30:25.310 }, 00:30:25.310 { 00:30:25.310 "name": "BaseBdev2", 00:30:25.310 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:25.310 "is_configured": true, 00:30:25.310 "data_offset": 0, 00:30:25.310 "data_size": 65536 00:30:25.310 }, 00:30:25.310 { 00:30:25.310 "name": "BaseBdev3", 00:30:25.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.310 "is_configured": false, 00:30:25.310 "data_offset": 0, 00:30:25.310 "data_size": 0 00:30:25.310 }, 00:30:25.310 { 00:30:25.310 "name": "BaseBdev4", 00:30:25.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.310 "is_configured": false, 00:30:25.310 "data_offset": 0, 00:30:25.310 "data_size": 0 00:30:25.310 } 00:30:25.310 ] 00:30:25.310 }' 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.310 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.878 07:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:25.878 07:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.878 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.878 [2024-11-20 07:27:50.052038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:25.878 BaseBdev3 00:30:25.878 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.879 [ 00:30:25.879 { 00:30:25.879 "name": "BaseBdev3", 00:30:25.879 "aliases": [ 00:30:25.879 "7136976a-13af-4201-844d-11e8d31d23fd" 00:30:25.879 ], 00:30:25.879 "product_name": "Malloc disk", 00:30:25.879 "block_size": 512, 00:30:25.879 "num_blocks": 65536, 00:30:25.879 "uuid": "7136976a-13af-4201-844d-11e8d31d23fd", 00:30:25.879 "assigned_rate_limits": { 00:30:25.879 "rw_ios_per_sec": 0, 00:30:25.879 "rw_mbytes_per_sec": 0, 00:30:25.879 "r_mbytes_per_sec": 0, 00:30:25.879 
"w_mbytes_per_sec": 0 00:30:25.879 }, 00:30:25.879 "claimed": true, 00:30:25.879 "claim_type": "exclusive_write", 00:30:25.879 "zoned": false, 00:30:25.879 "supported_io_types": { 00:30:25.879 "read": true, 00:30:25.879 "write": true, 00:30:25.879 "unmap": true, 00:30:25.879 "flush": true, 00:30:25.879 "reset": true, 00:30:25.879 "nvme_admin": false, 00:30:25.879 "nvme_io": false, 00:30:25.879 "nvme_io_md": false, 00:30:25.879 "write_zeroes": true, 00:30:25.879 "zcopy": true, 00:30:25.879 "get_zone_info": false, 00:30:25.879 "zone_management": false, 00:30:25.879 "zone_append": false, 00:30:25.879 "compare": false, 00:30:25.879 "compare_and_write": false, 00:30:25.879 "abort": true, 00:30:25.879 "seek_hole": false, 00:30:25.879 "seek_data": false, 00:30:25.879 "copy": true, 00:30:25.879 "nvme_iov_md": false 00:30:25.879 }, 00:30:25.879 "memory_domains": [ 00:30:25.879 { 00:30:25.879 "dma_device_id": "system", 00:30:25.879 "dma_device_type": 1 00:30:25.879 }, 00:30:25.879 { 00:30:25.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:25.879 "dma_device_type": 2 00:30:25.879 } 00:30:25.879 ], 00:30:25.879 "driver_specific": {} 00:30:25.879 } 00:30:25.879 ] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.879 "name": "Existed_Raid", 00:30:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.879 "strip_size_kb": 64, 00:30:25.879 "state": "configuring", 00:30:25.879 "raid_level": "raid5f", 00:30:25.879 "superblock": false, 00:30:25.879 "num_base_bdevs": 4, 00:30:25.879 "num_base_bdevs_discovered": 3, 00:30:25.879 "num_base_bdevs_operational": 4, 00:30:25.879 "base_bdevs_list": [ 00:30:25.879 { 00:30:25.879 "name": "BaseBdev1", 00:30:25.879 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:25.879 
"is_configured": true, 00:30:25.879 "data_offset": 0, 00:30:25.879 "data_size": 65536 00:30:25.879 }, 00:30:25.879 { 00:30:25.879 "name": "BaseBdev2", 00:30:25.879 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:25.879 "is_configured": true, 00:30:25.879 "data_offset": 0, 00:30:25.879 "data_size": 65536 00:30:25.879 }, 00:30:25.879 { 00:30:25.879 "name": "BaseBdev3", 00:30:25.879 "uuid": "7136976a-13af-4201-844d-11e8d31d23fd", 00:30:25.879 "is_configured": true, 00:30:25.879 "data_offset": 0, 00:30:25.879 "data_size": 65536 00:30:25.879 }, 00:30:25.879 { 00:30:25.879 "name": "BaseBdev4", 00:30:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.879 "is_configured": false, 00:30:25.879 "data_offset": 0, 00:30:25.879 "data_size": 0 00:30:25.879 } 00:30:25.879 ] 00:30:25.879 }' 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.879 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.447 [2024-11-20 07:27:50.671863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:26.447 [2024-11-20 07:27:50.671966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:26.447 [2024-11-20 07:27:50.671980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:26.447 [2024-11-20 07:27:50.672262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:26.447 [2024-11-20 07:27:50.679195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:30:26.447 [2024-11-20 07:27:50.679226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:26.447 [2024-11-20 07:27:50.679685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.447 BaseBdev4 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.447 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.447 [ 00:30:26.447 { 00:30:26.448 "name": "BaseBdev4", 00:30:26.448 "aliases": [ 00:30:26.448 
"848b7251-2b4b-4f98-8a29-5eb4284fb8fd" 00:30:26.448 ], 00:30:26.448 "product_name": "Malloc disk", 00:30:26.448 "block_size": 512, 00:30:26.448 "num_blocks": 65536, 00:30:26.448 "uuid": "848b7251-2b4b-4f98-8a29-5eb4284fb8fd", 00:30:26.448 "assigned_rate_limits": { 00:30:26.448 "rw_ios_per_sec": 0, 00:30:26.448 "rw_mbytes_per_sec": 0, 00:30:26.448 "r_mbytes_per_sec": 0, 00:30:26.448 "w_mbytes_per_sec": 0 00:30:26.448 }, 00:30:26.448 "claimed": true, 00:30:26.448 "claim_type": "exclusive_write", 00:30:26.448 "zoned": false, 00:30:26.448 "supported_io_types": { 00:30:26.448 "read": true, 00:30:26.448 "write": true, 00:30:26.448 "unmap": true, 00:30:26.448 "flush": true, 00:30:26.448 "reset": true, 00:30:26.448 "nvme_admin": false, 00:30:26.448 "nvme_io": false, 00:30:26.448 "nvme_io_md": false, 00:30:26.448 "write_zeroes": true, 00:30:26.448 "zcopy": true, 00:30:26.448 "get_zone_info": false, 00:30:26.448 "zone_management": false, 00:30:26.448 "zone_append": false, 00:30:26.448 "compare": false, 00:30:26.448 "compare_and_write": false, 00:30:26.448 "abort": true, 00:30:26.448 "seek_hole": false, 00:30:26.448 "seek_data": false, 00:30:26.448 "copy": true, 00:30:26.448 "nvme_iov_md": false 00:30:26.448 }, 00:30:26.448 "memory_domains": [ 00:30:26.448 { 00:30:26.448 "dma_device_id": "system", 00:30:26.448 "dma_device_type": 1 00:30:26.448 }, 00:30:26.448 { 00:30:26.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.448 "dma_device_type": 2 00:30:26.448 } 00:30:26.448 ], 00:30:26.448 "driver_specific": {} 00:30:26.448 } 00:30:26.448 ] 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:26.448 
07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.448 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.707 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.707 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.707 "name": "Existed_Raid", 00:30:26.707 "uuid": "c3d3c38c-7434-4dcf-a1f7-0fab17c1cb8b", 00:30:26.707 "strip_size_kb": 64, 00:30:26.707 "state": 
"online", 00:30:26.707 "raid_level": "raid5f", 00:30:26.707 "superblock": false, 00:30:26.707 "num_base_bdevs": 4, 00:30:26.707 "num_base_bdevs_discovered": 4, 00:30:26.707 "num_base_bdevs_operational": 4, 00:30:26.707 "base_bdevs_list": [ 00:30:26.707 { 00:30:26.707 "name": "BaseBdev1", 00:30:26.707 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:26.707 "is_configured": true, 00:30:26.707 "data_offset": 0, 00:30:26.707 "data_size": 65536 00:30:26.707 }, 00:30:26.707 { 00:30:26.707 "name": "BaseBdev2", 00:30:26.707 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:26.707 "is_configured": true, 00:30:26.707 "data_offset": 0, 00:30:26.707 "data_size": 65536 00:30:26.707 }, 00:30:26.707 { 00:30:26.707 "name": "BaseBdev3", 00:30:26.707 "uuid": "7136976a-13af-4201-844d-11e8d31d23fd", 00:30:26.707 "is_configured": true, 00:30:26.707 "data_offset": 0, 00:30:26.707 "data_size": 65536 00:30:26.707 }, 00:30:26.707 { 00:30:26.707 "name": "BaseBdev4", 00:30:26.707 "uuid": "848b7251-2b4b-4f98-8a29-5eb4284fb8fd", 00:30:26.707 "is_configured": true, 00:30:26.707 "data_offset": 0, 00:30:26.707 "data_size": 65536 00:30:26.707 } 00:30:26.707 ] 00:30:26.707 }' 00:30:26.707 07:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.707 07:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:27.274 07:27:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:27.274 [2024-11-20 07:27:51.279882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.274 "name": "Existed_Raid", 00:30:27.274 "aliases": [ 00:30:27.274 "c3d3c38c-7434-4dcf-a1f7-0fab17c1cb8b" 00:30:27.274 ], 00:30:27.274 "product_name": "Raid Volume", 00:30:27.274 "block_size": 512, 00:30:27.274 "num_blocks": 196608, 00:30:27.274 "uuid": "c3d3c38c-7434-4dcf-a1f7-0fab17c1cb8b", 00:30:27.274 "assigned_rate_limits": { 00:30:27.274 "rw_ios_per_sec": 0, 00:30:27.274 "rw_mbytes_per_sec": 0, 00:30:27.274 "r_mbytes_per_sec": 0, 00:30:27.274 "w_mbytes_per_sec": 0 00:30:27.274 }, 00:30:27.274 "claimed": false, 00:30:27.274 "zoned": false, 00:30:27.274 "supported_io_types": { 00:30:27.274 "read": true, 00:30:27.274 "write": true, 00:30:27.274 "unmap": false, 00:30:27.274 "flush": false, 00:30:27.274 "reset": true, 00:30:27.274 "nvme_admin": false, 00:30:27.274 "nvme_io": false, 00:30:27.274 "nvme_io_md": false, 00:30:27.274 "write_zeroes": true, 00:30:27.274 "zcopy": false, 00:30:27.274 "get_zone_info": false, 00:30:27.274 "zone_management": false, 00:30:27.274 "zone_append": false, 00:30:27.274 "compare": false, 00:30:27.274 "compare_and_write": false, 00:30:27.274 "abort": false, 
00:30:27.274 "seek_hole": false, 00:30:27.274 "seek_data": false, 00:30:27.274 "copy": false, 00:30:27.274 "nvme_iov_md": false 00:30:27.274 }, 00:30:27.274 "driver_specific": { 00:30:27.274 "raid": { 00:30:27.274 "uuid": "c3d3c38c-7434-4dcf-a1f7-0fab17c1cb8b", 00:30:27.274 "strip_size_kb": 64, 00:30:27.274 "state": "online", 00:30:27.274 "raid_level": "raid5f", 00:30:27.274 "superblock": false, 00:30:27.274 "num_base_bdevs": 4, 00:30:27.274 "num_base_bdevs_discovered": 4, 00:30:27.274 "num_base_bdevs_operational": 4, 00:30:27.274 "base_bdevs_list": [ 00:30:27.274 { 00:30:27.274 "name": "BaseBdev1", 00:30:27.274 "uuid": "f0b8fc34-de18-482b-94be-c69d7c4077ff", 00:30:27.274 "is_configured": true, 00:30:27.274 "data_offset": 0, 00:30:27.274 "data_size": 65536 00:30:27.274 }, 00:30:27.274 { 00:30:27.274 "name": "BaseBdev2", 00:30:27.274 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:27.274 "is_configured": true, 00:30:27.274 "data_offset": 0, 00:30:27.274 "data_size": 65536 00:30:27.274 }, 00:30:27.274 { 00:30:27.274 "name": "BaseBdev3", 00:30:27.274 "uuid": "7136976a-13af-4201-844d-11e8d31d23fd", 00:30:27.274 "is_configured": true, 00:30:27.274 "data_offset": 0, 00:30:27.274 "data_size": 65536 00:30:27.274 }, 00:30:27.274 { 00:30:27.274 "name": "BaseBdev4", 00:30:27.274 "uuid": "848b7251-2b4b-4f98-8a29-5eb4284fb8fd", 00:30:27.274 "is_configured": true, 00:30:27.274 "data_offset": 0, 00:30:27.274 "data_size": 65536 00:30:27.274 } 00:30:27.274 ] 00:30:27.274 } 00:30:27.274 } 00:30:27.274 }' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:27.274 BaseBdev2 00:30:27.274 BaseBdev3 00:30:27.274 BaseBdev4' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.274 07:27:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.533 [2024-11-20 07:27:51.647794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:27.533 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.534 07:27:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.534 "name": "Existed_Raid", 00:30:27.534 "uuid": "c3d3c38c-7434-4dcf-a1f7-0fab17c1cb8b", 00:30:27.534 "strip_size_kb": 64, 00:30:27.534 "state": "online", 00:30:27.534 "raid_level": "raid5f", 00:30:27.534 "superblock": false, 00:30:27.534 "num_base_bdevs": 4, 00:30:27.534 "num_base_bdevs_discovered": 3, 00:30:27.534 "num_base_bdevs_operational": 3, 00:30:27.534 "base_bdevs_list": [ 00:30:27.534 { 00:30:27.534 "name": null, 00:30:27.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.534 "is_configured": false, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": "BaseBdev2", 00:30:27.534 "uuid": "e26e30b9-40d5-43a2-b8d2-5b91671d2b97", 00:30:27.534 "is_configured": true, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": "BaseBdev3", 00:30:27.534 "uuid": "7136976a-13af-4201-844d-11e8d31d23fd", 00:30:27.534 "is_configured": true, 00:30:27.534 
"data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": "BaseBdev4", 00:30:27.534 "uuid": "848b7251-2b4b-4f98-8a29-5eb4284fb8fd", 00:30:27.534 "is_configured": true, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 } 00:30:27.534 ] 00:30:27.534 }' 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.534 07:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.101 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.101 [2024-11-20 07:27:52.335740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:28.101 
[2024-11-20 07:27:52.335881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:28.361 [2024-11-20 07:27:52.409080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 [2024-11-20 07:27:52.473107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # 
(( i++ )) 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.361 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 [2024-11-20 07:27:52.612180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:28.361 [2024-11-20 07:27:52.612254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.619 07:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 BaseBdev2 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 [ 00:30:28.619 { 00:30:28.619 "name": "BaseBdev2", 00:30:28.619 "aliases": [ 00:30:28.619 "fa664bf9-3892-44c2-8c0c-f75c33f5e252" 00:30:28.619 ], 00:30:28.619 "product_name": "Malloc disk", 00:30:28.619 "block_size": 512, 00:30:28.619 "num_blocks": 65536, 00:30:28.619 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252", 00:30:28.619 "assigned_rate_limits": { 00:30:28.619 "rw_ios_per_sec": 0, 00:30:28.619 "rw_mbytes_per_sec": 0, 00:30:28.619 "r_mbytes_per_sec": 0, 00:30:28.619 "w_mbytes_per_sec": 0 00:30:28.619 }, 00:30:28.619 "claimed": false, 00:30:28.619 "zoned": false, 00:30:28.619 "supported_io_types": { 00:30:28.619 "read": true, 00:30:28.619 "write": true, 00:30:28.619 "unmap": true, 00:30:28.619 "flush": true, 00:30:28.619 "reset": true, 00:30:28.619 "nvme_admin": false, 00:30:28.619 "nvme_io": false, 00:30:28.619 "nvme_io_md": false, 00:30:28.619 "write_zeroes": true, 00:30:28.619 "zcopy": true, 00:30:28.619 "get_zone_info": false, 00:30:28.619 "zone_management": false, 00:30:28.619 "zone_append": false, 00:30:28.619 "compare": false, 
00:30:28.619 "compare_and_write": false, 00:30:28.619 "abort": true, 00:30:28.619 "seek_hole": false, 00:30:28.619 "seek_data": false, 00:30:28.619 "copy": true, 00:30:28.619 "nvme_iov_md": false 00:30:28.619 }, 00:30:28.619 "memory_domains": [ 00:30:28.619 { 00:30:28.619 "dma_device_id": "system", 00:30:28.619 "dma_device_type": 1 00:30:28.619 }, 00:30:28.619 { 00:30:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.619 "dma_device_type": 2 00:30:28.619 } 00:30:28.619 ], 00:30:28.619 "driver_specific": {} 00:30:28.619 } 00:30:28.619 ] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 BaseBdev3 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.619 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.619 [ 00:30:28.619 { 00:30:28.619 "name": "BaseBdev3", 00:30:28.619 "aliases": [ 00:30:28.619 "7cb4fbfd-15ae-41a9-8e39-2b83db653257" 00:30:28.619 ], 00:30:28.619 "product_name": "Malloc disk", 00:30:28.619 "block_size": 512, 00:30:28.619 "num_blocks": 65536, 00:30:28.619 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257", 00:30:28.619 "assigned_rate_limits": { 00:30:28.619 "rw_ios_per_sec": 0, 00:30:28.619 "rw_mbytes_per_sec": 0, 00:30:28.619 "r_mbytes_per_sec": 0, 00:30:28.619 "w_mbytes_per_sec": 0 00:30:28.619 }, 00:30:28.619 "claimed": false, 00:30:28.619 "zoned": false, 00:30:28.619 "supported_io_types": { 00:30:28.619 "read": true, 00:30:28.619 "write": true, 00:30:28.619 "unmap": true, 00:30:28.619 "flush": true, 00:30:28.619 "reset": true, 00:30:28.619 "nvme_admin": false, 00:30:28.619 "nvme_io": false, 00:30:28.619 "nvme_io_md": false, 00:30:28.619 "write_zeroes": true, 00:30:28.619 "zcopy": true, 00:30:28.619 "get_zone_info": false, 00:30:28.619 "zone_management": false, 00:30:28.619 "zone_append": 
false, 00:30:28.619 "compare": false, 00:30:28.619 "compare_and_write": false, 00:30:28.619 "abort": true, 00:30:28.619 "seek_hole": false, 00:30:28.619 "seek_data": false, 00:30:28.620 "copy": true, 00:30:28.620 "nvme_iov_md": false 00:30:28.620 }, 00:30:28.620 "memory_domains": [ 00:30:28.620 { 00:30:28.620 "dma_device_id": "system", 00:30:28.620 "dma_device_type": 1 00:30:28.620 }, 00:30:28.620 { 00:30:28.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.620 "dma_device_type": 2 00:30:28.620 } 00:30:28.620 ], 00:30:28.620 "driver_specific": {} 00:30:28.620 } 00:30:28.620 ] 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.620 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 BaseBdev4 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:28.878 07:27:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 [ 00:30:28.878 { 00:30:28.878 "name": "BaseBdev4", 00:30:28.878 "aliases": [ 00:30:28.878 "5348cec4-fa57-4e08-97b5-4d53730ac627" 00:30:28.878 ], 00:30:28.878 "product_name": "Malloc disk", 00:30:28.878 "block_size": 512, 00:30:28.878 "num_blocks": 65536, 00:30:28.878 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627", 00:30:28.878 "assigned_rate_limits": { 00:30:28.878 "rw_ios_per_sec": 0, 00:30:28.878 "rw_mbytes_per_sec": 0, 00:30:28.878 "r_mbytes_per_sec": 0, 00:30:28.878 "w_mbytes_per_sec": 0 00:30:28.878 }, 00:30:28.878 "claimed": false, 00:30:28.878 "zoned": false, 00:30:28.878 "supported_io_types": { 00:30:28.878 "read": true, 00:30:28.878 "write": true, 00:30:28.878 "unmap": true, 00:30:28.878 "flush": true, 00:30:28.878 "reset": true, 00:30:28.878 "nvme_admin": false, 00:30:28.878 "nvme_io": false, 00:30:28.878 "nvme_io_md": false, 00:30:28.878 "write_zeroes": true, 00:30:28.878 "zcopy": true, 00:30:28.878 "get_zone_info": false, 00:30:28.878 
"zone_management": false, 00:30:28.878 "zone_append": false, 00:30:28.878 "compare": false, 00:30:28.878 "compare_and_write": false, 00:30:28.878 "abort": true, 00:30:28.878 "seek_hole": false, 00:30:28.878 "seek_data": false, 00:30:28.878 "copy": true, 00:30:28.878 "nvme_iov_md": false 00:30:28.878 }, 00:30:28.878 "memory_domains": [ 00:30:28.878 { 00:30:28.878 "dma_device_id": "system", 00:30:28.878 "dma_device_type": 1 00:30:28.878 }, 00:30:28.878 { 00:30:28.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.878 "dma_device_type": 2 00:30:28.878 } 00:30:28.878 ], 00:30:28.878 "driver_specific": {} 00:30:28.878 } 00:30:28.878 ] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 [2024-11-20 07:27:52.961253] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:28.878 [2024-11-20 07:27:52.961302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:28.878 [2024-11-20 07:27:52.961346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:28.878 [2024-11-20 07:27:52.963869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:30:28.878 [2024-11-20 07:27:52.963969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.878 07:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:28.878 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.878 "name": "Existed_Raid", 00:30:28.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.878 "strip_size_kb": 64, 00:30:28.878 "state": "configuring", 00:30:28.878 "raid_level": "raid5f", 00:30:28.878 "superblock": false, 00:30:28.878 "num_base_bdevs": 4, 00:30:28.878 "num_base_bdevs_discovered": 3, 00:30:28.878 "num_base_bdevs_operational": 4, 00:30:28.878 "base_bdevs_list": [ 00:30:28.878 { 00:30:28.878 "name": "BaseBdev1", 00:30:28.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.878 "is_configured": false, 00:30:28.878 "data_offset": 0, 00:30:28.878 "data_size": 0 00:30:28.878 }, 00:30:28.878 { 00:30:28.878 "name": "BaseBdev2", 00:30:28.878 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252", 00:30:28.878 "is_configured": true, 00:30:28.878 "data_offset": 0, 00:30:28.878 "data_size": 65536 00:30:28.878 }, 00:30:28.878 { 00:30:28.878 "name": "BaseBdev3", 00:30:28.878 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257", 00:30:28.878 "is_configured": true, 00:30:28.878 "data_offset": 0, 00:30:28.878 "data_size": 65536 00:30:28.878 }, 00:30:28.878 { 00:30:28.878 "name": "BaseBdev4", 00:30:28.878 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627", 00:30:28.878 "is_configured": true, 00:30:28.878 "data_offset": 0, 00:30:28.878 "data_size": 65536 00:30:28.878 } 00:30:28.878 ] 00:30:28.878 }' 00:30:28.878 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.878 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:29.446 [2024-11-20 07:27:53.489451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:29.446 "name": "Existed_Raid",
00:30:29.446 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:29.446 "strip_size_kb": 64,
00:30:29.446 "state": "configuring",
00:30:29.446 "raid_level": "raid5f",
00:30:29.446 "superblock": false,
00:30:29.446 "num_base_bdevs": 4,
00:30:29.446 "num_base_bdevs_discovered": 2,
00:30:29.446 "num_base_bdevs_operational": 4,
00:30:29.446 "base_bdevs_list": [
00:30:29.446 {
00:30:29.446 "name": "BaseBdev1",
00:30:29.446 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:29.446 "is_configured": false,
00:30:29.446 "data_offset": 0,
00:30:29.446 "data_size": 0
00:30:29.446 },
00:30:29.446 {
00:30:29.446 "name": null,
00:30:29.446 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:29.446 "is_configured": false,
00:30:29.446 "data_offset": 0,
00:30:29.446 "data_size": 65536
00:30:29.446 },
00:30:29.446 {
00:30:29.446 "name": "BaseBdev3",
00:30:29.446 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:29.446 "is_configured": true,
00:30:29.446 "data_offset": 0,
00:30:29.446 "data_size": 65536
00:30:29.446 },
00:30:29.446 {
00:30:29.446 "name": "BaseBdev4",
00:30:29.446 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:29.446 "is_configured": true,
00:30:29.446 "data_offset": 0,
00:30:29.446 "data_size": 65536
00:30:29.446 }
00:30:29.446 ]
00:30:29.446 }'
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:29.446 07:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.014 [2024-11-20 07:27:54.100862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:30:30.014 BaseBdev1
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.014 [
00:30:30.014 {
00:30:30.014 "name": "BaseBdev1",
00:30:30.014 "aliases": [
00:30:30.014 "b800499c-43bc-46fc-bb7f-91566e2a65d2"
00:30:30.014 ],
00:30:30.014 "product_name": "Malloc disk",
00:30:30.014 "block_size": 512,
00:30:30.014 "num_blocks": 65536,
00:30:30.014 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:30.014 "assigned_rate_limits": {
00:30:30.014 "rw_ios_per_sec": 0,
00:30:30.014 "rw_mbytes_per_sec": 0,
00:30:30.014 "r_mbytes_per_sec": 0,
00:30:30.014 "w_mbytes_per_sec": 0
00:30:30.014 },
00:30:30.014 "claimed": true,
00:30:30.014 "claim_type": "exclusive_write",
00:30:30.014 "zoned": false,
00:30:30.014 "supported_io_types": {
00:30:30.014 "read": true,
00:30:30.014 "write": true,
00:30:30.014 "unmap": true,
00:30:30.014 "flush": true,
00:30:30.014 "reset": true,
00:30:30.014 "nvme_admin": false,
00:30:30.014 "nvme_io": false,
00:30:30.014 "nvme_io_md": false,
00:30:30.014 "write_zeroes": true,
00:30:30.014 "zcopy": true,
00:30:30.014 "get_zone_info": false,
00:30:30.014 "zone_management": false,
00:30:30.014 "zone_append": false,
00:30:30.014 "compare": false,
00:30:30.014 "compare_and_write": false,
00:30:30.014 "abort": true,
00:30:30.014 "seek_hole": false,
00:30:30.014 "seek_data": false,
00:30:30.014 "copy": true,
00:30:30.014 "nvme_iov_md": false
00:30:30.014 },
00:30:30.014 "memory_domains": [
00:30:30.014 {
00:30:30.014 "dma_device_id": "system",
00:30:30.014 "dma_device_type": 1
00:30:30.014 },
00:30:30.014 {
00:30:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:30.014 "dma_device_type": 2
00:30:30.014 }
00:30:30.014 ],
00:30:30.014 "driver_specific": {}
00:30:30.014 }
00:30:30.014 ]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:30.015 "name": "Existed_Raid",
00:30:30.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:30.015 "strip_size_kb": 64,
00:30:30.015 "state": "configuring",
00:30:30.015 "raid_level": "raid5f",
00:30:30.015 "superblock": false,
00:30:30.015 "num_base_bdevs": 4,
00:30:30.015 "num_base_bdevs_discovered": 3,
00:30:30.015 "num_base_bdevs_operational": 4,
00:30:30.015 "base_bdevs_list": [
00:30:30.015 {
00:30:30.015 "name": "BaseBdev1",
00:30:30.015 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:30.015 "is_configured": true,
00:30:30.015 "data_offset": 0,
00:30:30.015 "data_size": 65536
00:30:30.015 },
00:30:30.015 {
00:30:30.015 "name": null,
00:30:30.015 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:30.015 "is_configured": false,
00:30:30.015 "data_offset": 0,
00:30:30.015 "data_size": 65536
00:30:30.015 },
00:30:30.015 {
00:30:30.015 "name": "BaseBdev3",
00:30:30.015 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:30.015 "is_configured": true,
00:30:30.015 "data_offset": 0,
00:30:30.015 "data_size": 65536
00:30:30.015 },
00:30:30.015 {
00:30:30.015 "name": "BaseBdev4",
00:30:30.015 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:30.015 "is_configured": true,
00:30:30.015 "data_offset": 0,
00:30:30.015 "data_size": 65536
00:30:30.015 }
00:30:30.015 ]
00:30:30.015 }'
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:30.015 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.583 [2024-11-20 07:27:54.745283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:30.583 "name": "Existed_Raid",
00:30:30.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:30.583 "strip_size_kb": 64,
00:30:30.583 "state": "configuring",
00:30:30.583 "raid_level": "raid5f",
00:30:30.583 "superblock": false,
00:30:30.583 "num_base_bdevs": 4,
00:30:30.583 "num_base_bdevs_discovered": 2,
00:30:30.583 "num_base_bdevs_operational": 4,
00:30:30.583 "base_bdevs_list": [
00:30:30.583 {
00:30:30.583 "name": "BaseBdev1",
00:30:30.583 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:30.583 "is_configured": true,
00:30:30.583 "data_offset": 0,
00:30:30.583 "data_size": 65536
00:30:30.583 },
00:30:30.583 {
00:30:30.583 "name": null,
00:30:30.583 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:30.583 "is_configured": false,
00:30:30.583 "data_offset": 0,
00:30:30.583 "data_size": 65536
00:30:30.583 },
00:30:30.583 {
00:30:30.583 "name": null,
00:30:30.583 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:30.583 "is_configured": false,
00:30:30.583 "data_offset": 0,
00:30:30.583 "data_size": 65536
00:30:30.583 },
00:30:30.583 {
00:30:30.583 "name": "BaseBdev4",
00:30:30.583 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:30.583 "is_configured": true,
00:30:30.583 "data_offset": 0,
00:30:30.583 "data_size": 65536
00:30:30.583 }
00:30:30.583 ]
00:30:30.583 }'
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:30.583 07:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.151 [2024-11-20 07:27:55.325428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:31.151 "name": "Existed_Raid",
00:30:31.151 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:31.151 "strip_size_kb": 64,
00:30:31.151 "state": "configuring",
00:30:31.151 "raid_level": "raid5f",
00:30:31.151 "superblock": false,
00:30:31.151 "num_base_bdevs": 4,
00:30:31.151 "num_base_bdevs_discovered": 3,
00:30:31.151 "num_base_bdevs_operational": 4,
00:30:31.151 "base_bdevs_list": [
00:30:31.151 {
00:30:31.151 "name": "BaseBdev1",
00:30:31.151 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:31.151 "is_configured": true,
00:30:31.151 "data_offset": 0,
00:30:31.151 "data_size": 65536
00:30:31.151 },
00:30:31.151 {
00:30:31.151 "name": null,
00:30:31.151 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:31.151 "is_configured": false,
00:30:31.151 "data_offset": 0,
00:30:31.151 "data_size": 65536
00:30:31.151 },
00:30:31.151 {
00:30:31.151 "name": "BaseBdev3",
00:30:31.151 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:31.151 "is_configured": true,
00:30:31.151 "data_offset": 0,
00:30:31.151 "data_size": 65536
00:30:31.151 },
00:30:31.151 {
00:30:31.151 "name": "BaseBdev4",
00:30:31.151 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:31.151 "is_configured": true,
00:30:31.151 "data_offset": 0,
00:30:31.151 "data_size": 65536
00:30:31.151 }
00:30:31.151 ]
00:30:31.151 }'
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:31.151 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:30:31.718 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.719 [2024-11-20 07:27:55.901646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:31.719 07:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:31.719 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.977 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:31.977 "name": "Existed_Raid",
00:30:31.977 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:31.977 "strip_size_kb": 64,
00:30:31.977 "state": "configuring",
00:30:31.977 "raid_level": "raid5f",
00:30:31.977 "superblock": false,
00:30:31.977 "num_base_bdevs": 4,
00:30:31.977 "num_base_bdevs_discovered": 2,
00:30:31.977 "num_base_bdevs_operational": 4,
00:30:31.977 "base_bdevs_list": [
00:30:31.977 {
00:30:31.977 "name": null,
00:30:31.977 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:31.977 "is_configured": false,
00:30:31.977 "data_offset": 0,
00:30:31.977 "data_size": 65536
00:30:31.977 },
00:30:31.977 {
00:30:31.977 "name": null,
00:30:31.977 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:31.977 "is_configured": false,
00:30:31.977 "data_offset": 0,
00:30:31.977 "data_size": 65536
00:30:31.977 },
00:30:31.977 {
00:30:31.977 "name": "BaseBdev3",
00:30:31.977 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:31.977 "is_configured": true,
00:30:31.977 "data_offset": 0,
00:30:31.977 "data_size": 65536
00:30:31.977 },
00:30:31.977 {
00:30:31.977 "name": "BaseBdev4",
00:30:31.977 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:31.977 "is_configured": true,
00:30:31.977 "data_offset": 0,
00:30:31.977 "data_size": 65536
00:30:31.977 }
00:30:31.977 ]
00:30:31.977 }'
00:30:31.977 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:31.977 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:32.235 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:32.235 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.235 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:30:32.235 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:32.494 [2024-11-20 07:27:56.565345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:32.494 "name": "Existed_Raid",
00:30:32.494 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:32.494 "strip_size_kb": 64,
00:30:32.494 "state": "configuring",
00:30:32.494 "raid_level": "raid5f",
00:30:32.494 "superblock": false,
00:30:32.494 "num_base_bdevs": 4,
00:30:32.494 "num_base_bdevs_discovered": 3,
00:30:32.494 "num_base_bdevs_operational": 4,
00:30:32.494 "base_bdevs_list": [
00:30:32.494 {
00:30:32.494 "name": null,
00:30:32.494 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:32.494 "is_configured": false,
00:30:32.494 "data_offset": 0,
00:30:32.494 "data_size": 65536
00:30:32.494 },
00:30:32.494 {
00:30:32.494 "name": "BaseBdev2",
00:30:32.494 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252",
00:30:32.494 "is_configured": true,
00:30:32.494 "data_offset": 0,
00:30:32.494 "data_size": 65536
00:30:32.494 },
00:30:32.494 {
00:30:32.494 "name": "BaseBdev3",
00:30:32.494 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257",
00:30:32.494 "is_configured": true,
00:30:32.494 "data_offset": 0,
00:30:32.494 "data_size": 65536
00:30:32.494 },
00:30:32.494 {
00:30:32.494 "name": "BaseBdev4",
00:30:32.494 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627",
00:30:32.494 "is_configured": true,
00:30:32.494 "data_offset": 0,
00:30:32.494 "data_size": 65536
00:30:32.494 }
00:30:32.494 ]
00:30:32.494 }'
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:32.494 07:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b800499c-43bc-46fc-bb7f-91566e2a65d2
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 [2024-11-20 07:27:57.239509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:30:33.061 [2024-11-20 07:27:57.239572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:30:33.061 [2024-11-20 07:27:57.239584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:30:33.061 [2024-11-20 07:27:57.240081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:30:33.061 [2024-11-20 07:27:57.246962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:30:33.061 [2024-11-20 07:27:57.247018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:30:33.061 [2024-11-20 07:27:57.247333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:33.061 NewBaseBdev
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:30:33.061 [
00:30:33.061 {
00:30:33.061 "name": "NewBaseBdev",
00:30:33.061 "aliases": [
00:30:33.061 "b800499c-43bc-46fc-bb7f-91566e2a65d2"
00:30:33.061 ],
00:30:33.061 "product_name": "Malloc disk",
00:30:33.061 "block_size": 512,
00:30:33.061 "num_blocks": 65536,
00:30:33.061 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2",
00:30:33.061 "assigned_rate_limits": {
00:30:33.061 "rw_ios_per_sec": 0,
00:30:33.061 "rw_mbytes_per_sec": 0,
00:30:33.061 "r_mbytes_per_sec": 0,
00:30:33.061 "w_mbytes_per_sec": 0
00:30:33.061 },
00:30:33.061 "claimed": true,
00:30:33.061 "claim_type": "exclusive_write",
00:30:33.061 "zoned": false,
00:30:33.061 "supported_io_types": {
00:30:33.061 "read": true,
00:30:33.061 "write": true,
00:30:33.061 "unmap": true,
00:30:33.061 "flush": true,
00:30:33.061 "reset": true,
00:30:33.061 "nvme_admin": false,
00:30:33.061 "nvme_io": false,
00:30:33.061 "nvme_io_md": false,
00:30:33.061 "write_zeroes": true,
00:30:33.061 "zcopy": true,
00:30:33.061 "get_zone_info": false,
00:30:33.061 "zone_management": false,
00:30:33.061 "zone_append": false,
00:30:33.061 "compare": false,
00:30:33.061 "compare_and_write": false,
00:30:33.061 "abort": true,
00:30:33.061 "seek_hole": false,
00:30:33.061 "seek_data": false,
00:30:33.061 "copy": true,
00:30:33.061 "nvme_iov_md": false
00:30:33.061 },
00:30:33.061 "memory_domains": [
00:30:33.061 {
00:30:33.061 "dma_device_id": "system",
00:30:33.061 "dma_device_type": 1
00:30:33.061 },
00:30:33.061 {
00:30:33.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:33.061 "dma_device_type": 2
00:30:33.061 }
00:30:33.061 ],
00:30:33.061 "driver_specific": {}
00:30:33.061 }
00:30:33.061 ]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.061 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.061 "name": "Existed_Raid", 00:30:33.061 "uuid": "74b3d22e-a6b1-42f2-99d2-782b92caa430", 00:30:33.061 "strip_size_kb": 64, 00:30:33.061 "state": "online", 00:30:33.061 "raid_level": "raid5f", 00:30:33.061 "superblock": false, 00:30:33.061 "num_base_bdevs": 4, 00:30:33.061 "num_base_bdevs_discovered": 4, 00:30:33.061 "num_base_bdevs_operational": 4, 00:30:33.061 "base_bdevs_list": [ 00:30:33.061 { 00:30:33.061 "name": "NewBaseBdev", 00:30:33.061 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2", 00:30:33.061 "is_configured": true, 00:30:33.061 "data_offset": 0, 00:30:33.062 "data_size": 65536 00:30:33.062 }, 00:30:33.062 { 00:30:33.062 "name": "BaseBdev2", 00:30:33.062 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252", 00:30:33.062 "is_configured": true, 00:30:33.062 "data_offset": 0, 00:30:33.062 "data_size": 65536 00:30:33.062 }, 00:30:33.062 { 00:30:33.062 "name": "BaseBdev3", 00:30:33.062 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257", 00:30:33.062 "is_configured": true, 00:30:33.062 "data_offset": 0, 00:30:33.062 "data_size": 65536 00:30:33.062 }, 00:30:33.062 { 00:30:33.062 "name": "BaseBdev4", 00:30:33.062 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627", 00:30:33.062 "is_configured": true, 00:30:33.062 "data_offset": 0, 00:30:33.062 "data_size": 65536 00:30:33.062 } 00:30:33.062 ] 00:30:33.062 }' 00:30:33.062 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.062 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.628 [2024-11-20 07:27:57.815872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.628 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.628 "name": "Existed_Raid", 00:30:33.628 "aliases": [ 00:30:33.628 "74b3d22e-a6b1-42f2-99d2-782b92caa430" 00:30:33.628 ], 00:30:33.628 "product_name": "Raid Volume", 00:30:33.628 "block_size": 512, 00:30:33.628 "num_blocks": 196608, 00:30:33.628 "uuid": "74b3d22e-a6b1-42f2-99d2-782b92caa430", 00:30:33.628 "assigned_rate_limits": { 00:30:33.628 "rw_ios_per_sec": 0, 00:30:33.628 "rw_mbytes_per_sec": 0, 00:30:33.628 "r_mbytes_per_sec": 0, 00:30:33.628 "w_mbytes_per_sec": 0 00:30:33.628 }, 00:30:33.628 "claimed": 
false, 00:30:33.629 "zoned": false, 00:30:33.629 "supported_io_types": { 00:30:33.629 "read": true, 00:30:33.629 "write": true, 00:30:33.629 "unmap": false, 00:30:33.629 "flush": false, 00:30:33.629 "reset": true, 00:30:33.629 "nvme_admin": false, 00:30:33.629 "nvme_io": false, 00:30:33.629 "nvme_io_md": false, 00:30:33.629 "write_zeroes": true, 00:30:33.629 "zcopy": false, 00:30:33.629 "get_zone_info": false, 00:30:33.629 "zone_management": false, 00:30:33.629 "zone_append": false, 00:30:33.629 "compare": false, 00:30:33.629 "compare_and_write": false, 00:30:33.629 "abort": false, 00:30:33.629 "seek_hole": false, 00:30:33.629 "seek_data": false, 00:30:33.629 "copy": false, 00:30:33.629 "nvme_iov_md": false 00:30:33.629 }, 00:30:33.629 "driver_specific": { 00:30:33.629 "raid": { 00:30:33.629 "uuid": "74b3d22e-a6b1-42f2-99d2-782b92caa430", 00:30:33.629 "strip_size_kb": 64, 00:30:33.629 "state": "online", 00:30:33.629 "raid_level": "raid5f", 00:30:33.629 "superblock": false, 00:30:33.629 "num_base_bdevs": 4, 00:30:33.629 "num_base_bdevs_discovered": 4, 00:30:33.629 "num_base_bdevs_operational": 4, 00:30:33.629 "base_bdevs_list": [ 00:30:33.629 { 00:30:33.629 "name": "NewBaseBdev", 00:30:33.629 "uuid": "b800499c-43bc-46fc-bb7f-91566e2a65d2", 00:30:33.629 "is_configured": true, 00:30:33.629 "data_offset": 0, 00:30:33.629 "data_size": 65536 00:30:33.629 }, 00:30:33.629 { 00:30:33.629 "name": "BaseBdev2", 00:30:33.629 "uuid": "fa664bf9-3892-44c2-8c0c-f75c33f5e252", 00:30:33.629 "is_configured": true, 00:30:33.629 "data_offset": 0, 00:30:33.629 "data_size": 65536 00:30:33.629 }, 00:30:33.629 { 00:30:33.629 "name": "BaseBdev3", 00:30:33.629 "uuid": "7cb4fbfd-15ae-41a9-8e39-2b83db653257", 00:30:33.629 "is_configured": true, 00:30:33.629 "data_offset": 0, 00:30:33.629 "data_size": 65536 00:30:33.629 }, 00:30:33.629 { 00:30:33.629 "name": "BaseBdev4", 00:30:33.629 "uuid": "5348cec4-fa57-4e08-97b5-4d53730ac627", 00:30:33.629 "is_configured": true, 00:30:33.629 "data_offset": 
0, 00:30:33.629 "data_size": 65536 00:30:33.629 } 00:30:33.629 ] 00:30:33.629 } 00:30:33.629 } 00:30:33.629 }' 00:30:33.629 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:33.629 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:33.629 BaseBdev2 00:30:33.629 BaseBdev3 00:30:33.629 BaseBdev4' 00:30:33.629 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.887 07:27:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:33.887 07:27:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.887 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.888 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.146 [2024-11-20 07:27:58.179582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.146 [2024-11-20 07:27:58.179651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:34.146 [2024-11-20 07:27:58.179998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:34.146 [2024-11-20 07:27:58.180400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:34.146 [2024-11-20 07:27:58.180416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83345 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83345 ']' 00:30:34.146 07:27:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83345 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83345 00:30:34.146 killing process with pid 83345 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83345' 00:30:34.146 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83345 00:30:34.146 [2024-11-20 07:27:58.217768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:34.147 07:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83345 00:30:34.405 [2024-11-20 07:27:58.519486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:35.356 00:30:35.356 real 0m12.817s 00:30:35.356 user 0m21.463s 00:30:35.356 sys 0m1.781s 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.356 ************************************ 00:30:35.356 END TEST raid5f_state_function_test 00:30:35.356 ************************************ 00:30:35.356 07:27:59 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:30:35.356 
07:27:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:35.356 07:27:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.356 07:27:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:35.356 ************************************ 00:30:35.356 START TEST raid5f_state_function_test_sb 00:30:35.356 ************************************ 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:35.356 07:27:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.356 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84023 00:30:35.357 Process raid pid: 84023 00:30:35.357 
07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84023' 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84023 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84023 ']' 00:30:35.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.357 07:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.616 [2024-11-20 07:27:59.647579] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:30:35.616 [2024-11-20 07:27:59.647800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.616 [2024-11-20 07:27:59.831737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.874 [2024-11-20 07:27:59.957051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.133 [2024-11-20 07:28:00.176861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:36.133 [2024-11-20 07:28:00.176921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.392 [2024-11-20 07:28:00.579174] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.392 [2024-11-20 07:28:00.579236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.392 [2024-11-20 07:28:00.579253] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.392 [2024-11-20 07:28:00.579269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:36.392 [2024-11-20 07:28:00.579279] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:30:36.392 [2024-11-20 07:28:00.579293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.392 [2024-11-20 07:28:00.579302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:36.392 [2024-11-20 07:28:00.579316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.392 "name": "Existed_Raid", 00:30:36.392 "uuid": "602c4708-07c3-4291-9209-0427c5d1a6ff", 00:30:36.392 "strip_size_kb": 64, 00:30:36.392 "state": "configuring", 00:30:36.392 "raid_level": "raid5f", 00:30:36.392 "superblock": true, 00:30:36.392 "num_base_bdevs": 4, 00:30:36.392 "num_base_bdevs_discovered": 0, 00:30:36.392 "num_base_bdevs_operational": 4, 00:30:36.392 "base_bdevs_list": [ 00:30:36.392 { 00:30:36.392 "name": "BaseBdev1", 00:30:36.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.392 "is_configured": false, 00:30:36.392 "data_offset": 0, 00:30:36.392 "data_size": 0 00:30:36.392 }, 00:30:36.392 { 00:30:36.392 "name": "BaseBdev2", 00:30:36.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.392 "is_configured": false, 00:30:36.392 "data_offset": 0, 00:30:36.392 "data_size": 0 00:30:36.392 }, 00:30:36.392 { 00:30:36.392 "name": "BaseBdev3", 00:30:36.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.392 "is_configured": false, 00:30:36.392 "data_offset": 0, 00:30:36.392 "data_size": 0 00:30:36.392 }, 00:30:36.392 { 00:30:36.392 "name": "BaseBdev4", 00:30:36.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.392 "is_configured": false, 00:30:36.392 "data_offset": 0, 00:30:36.392 "data_size": 0 00:30:36.392 } 00:30:36.392 ] 00:30:36.392 }' 00:30:36.392 07:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.393 07:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.959 [2024-11-20 07:28:01.095259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:36.959 [2024-11-20 07:28:01.095533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.959 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.959 [2024-11-20 07:28:01.103261] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.959 [2024-11-20 07:28:01.103323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.959 [2024-11-20 07:28:01.103338] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.959 [2024-11-20 07:28:01.103354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:36.959 [2024-11-20 07:28:01.103364] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:36.959 [2024-11-20 07:28:01.103378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.959 [2024-11-20 07:28:01.103387] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:36.960 [2024-11-20 07:28:01.103401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.960 [2024-11-20 07:28:01.145067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:36.960 BaseBdev1 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.960 [ 00:30:36.960 { 00:30:36.960 "name": "BaseBdev1", 00:30:36.960 "aliases": [ 00:30:36.960 "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f" 00:30:36.960 ], 00:30:36.960 "product_name": "Malloc disk", 00:30:36.960 "block_size": 512, 00:30:36.960 "num_blocks": 65536, 00:30:36.960 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:36.960 "assigned_rate_limits": { 00:30:36.960 "rw_ios_per_sec": 0, 00:30:36.960 "rw_mbytes_per_sec": 0, 00:30:36.960 "r_mbytes_per_sec": 0, 00:30:36.960 "w_mbytes_per_sec": 0 00:30:36.960 }, 00:30:36.960 "claimed": true, 00:30:36.960 "claim_type": "exclusive_write", 00:30:36.960 "zoned": false, 00:30:36.960 "supported_io_types": { 00:30:36.960 "read": true, 00:30:36.960 "write": true, 00:30:36.960 "unmap": true, 00:30:36.960 "flush": true, 00:30:36.960 "reset": true, 00:30:36.960 "nvme_admin": false, 00:30:36.960 "nvme_io": false, 00:30:36.960 "nvme_io_md": false, 00:30:36.960 "write_zeroes": true, 00:30:36.960 "zcopy": true, 00:30:36.960 "get_zone_info": false, 00:30:36.960 "zone_management": false, 00:30:36.960 "zone_append": false, 00:30:36.960 "compare": false, 00:30:36.960 "compare_and_write": false, 00:30:36.960 "abort": true, 00:30:36.960 "seek_hole": false, 00:30:36.960 "seek_data": false, 00:30:36.960 "copy": true, 00:30:36.960 "nvme_iov_md": false 00:30:36.960 }, 00:30:36.960 "memory_domains": [ 00:30:36.960 { 00:30:36.960 "dma_device_id": "system", 00:30:36.960 "dma_device_type": 1 00:30:36.960 }, 00:30:36.960 { 00:30:36.960 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:36.960 "dma_device_type": 2 00:30:36.960 } 00:30:36.960 ], 00:30:36.960 "driver_specific": {} 00:30:36.960 } 00:30:36.960 ] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.960 07:28:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.960 "name": "Existed_Raid", 00:30:36.960 "uuid": "e21bef19-466e-4a9c-a24b-4befc6f1af1b", 00:30:36.960 "strip_size_kb": 64, 00:30:36.960 "state": "configuring", 00:30:36.960 "raid_level": "raid5f", 00:30:36.960 "superblock": true, 00:30:36.960 "num_base_bdevs": 4, 00:30:36.960 "num_base_bdevs_discovered": 1, 00:30:36.960 "num_base_bdevs_operational": 4, 00:30:36.960 "base_bdevs_list": [ 00:30:36.960 { 00:30:36.960 "name": "BaseBdev1", 00:30:36.960 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:36.960 "is_configured": true, 00:30:36.960 "data_offset": 2048, 00:30:36.960 "data_size": 63488 00:30:36.960 }, 00:30:36.960 { 00:30:36.960 "name": "BaseBdev2", 00:30:36.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.960 "is_configured": false, 00:30:36.960 "data_offset": 0, 00:30:36.960 "data_size": 0 00:30:36.960 }, 00:30:36.960 { 00:30:36.960 "name": "BaseBdev3", 00:30:36.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.960 "is_configured": false, 00:30:36.960 "data_offset": 0, 00:30:36.960 "data_size": 0 00:30:36.960 }, 00:30:36.960 { 00:30:36.960 "name": "BaseBdev4", 00:30:36.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.960 "is_configured": false, 00:30:36.960 "data_offset": 0, 00:30:36.960 "data_size": 0 00:30:36.960 } 00:30:36.960 ] 00:30:36.960 }' 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.960 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:37.526 07:28:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.526 [2024-11-20 07:28:01.705264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:37.526 [2024-11-20 07:28:01.705340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.526 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 [2024-11-20 07:28:01.713325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.527 [2024-11-20 07:28:01.715996] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:37.527 [2024-11-20 07:28:01.716163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:37.527 [2024-11-20 07:28:01.716286] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:37.527 [2024-11-20 07:28:01.716343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:37.527 [2024-11-20 07:28:01.716560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:37.527 [2024-11-20 07:28:01.716740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.527 07:28:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.527 "name": "Existed_Raid", 00:30:37.527 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:37.527 "strip_size_kb": 64, 00:30:37.527 "state": "configuring", 00:30:37.527 "raid_level": "raid5f", 00:30:37.527 "superblock": true, 00:30:37.527 "num_base_bdevs": 4, 00:30:37.527 "num_base_bdevs_discovered": 1, 00:30:37.527 "num_base_bdevs_operational": 4, 00:30:37.527 "base_bdevs_list": [ 00:30:37.527 { 00:30:37.527 "name": "BaseBdev1", 00:30:37.527 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:37.527 "is_configured": true, 00:30:37.527 "data_offset": 2048, 00:30:37.527 "data_size": 63488 00:30:37.527 }, 00:30:37.527 { 00:30:37.527 "name": "BaseBdev2", 00:30:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.527 "is_configured": false, 00:30:37.527 "data_offset": 0, 00:30:37.527 "data_size": 0 00:30:37.527 }, 00:30:37.527 { 00:30:37.527 "name": "BaseBdev3", 00:30:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.527 "is_configured": false, 00:30:37.527 "data_offset": 0, 00:30:37.527 "data_size": 0 00:30:37.527 }, 00:30:37.527 { 00:30:37.527 "name": "BaseBdev4", 00:30:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.527 "is_configured": false, 00:30:37.527 "data_offset": 0, 00:30:37.527 "data_size": 0 00:30:37.527 } 00:30:37.527 ] 00:30:37.527 }' 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.527 07:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.095 [2024-11-20 07:28:02.269165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:38.095 BaseBdev2 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.095 [ 00:30:38.095 { 00:30:38.095 "name": "BaseBdev2", 00:30:38.095 "aliases": [ 00:30:38.095 
"7292cfbb-8525-4c61-a8d0-3d2d62ab70f9" 00:30:38.095 ], 00:30:38.095 "product_name": "Malloc disk", 00:30:38.095 "block_size": 512, 00:30:38.095 "num_blocks": 65536, 00:30:38.095 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:38.095 "assigned_rate_limits": { 00:30:38.095 "rw_ios_per_sec": 0, 00:30:38.095 "rw_mbytes_per_sec": 0, 00:30:38.095 "r_mbytes_per_sec": 0, 00:30:38.095 "w_mbytes_per_sec": 0 00:30:38.095 }, 00:30:38.095 "claimed": true, 00:30:38.095 "claim_type": "exclusive_write", 00:30:38.095 "zoned": false, 00:30:38.095 "supported_io_types": { 00:30:38.095 "read": true, 00:30:38.095 "write": true, 00:30:38.095 "unmap": true, 00:30:38.095 "flush": true, 00:30:38.095 "reset": true, 00:30:38.095 "nvme_admin": false, 00:30:38.095 "nvme_io": false, 00:30:38.095 "nvme_io_md": false, 00:30:38.095 "write_zeroes": true, 00:30:38.095 "zcopy": true, 00:30:38.095 "get_zone_info": false, 00:30:38.095 "zone_management": false, 00:30:38.095 "zone_append": false, 00:30:38.095 "compare": false, 00:30:38.095 "compare_and_write": false, 00:30:38.095 "abort": true, 00:30:38.095 "seek_hole": false, 00:30:38.095 "seek_data": false, 00:30:38.095 "copy": true, 00:30:38.095 "nvme_iov_md": false 00:30:38.095 }, 00:30:38.095 "memory_domains": [ 00:30:38.095 { 00:30:38.095 "dma_device_id": "system", 00:30:38.095 "dma_device_type": 1 00:30:38.095 }, 00:30:38.095 { 00:30:38.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.095 "dma_device_type": 2 00:30:38.095 } 00:30:38.095 ], 00:30:38.095 "driver_specific": {} 00:30:38.095 } 00:30:38.095 ] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.095 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.095 "name": "Existed_Raid", 00:30:38.095 "uuid": 
"3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:38.095 "strip_size_kb": 64, 00:30:38.095 "state": "configuring", 00:30:38.095 "raid_level": "raid5f", 00:30:38.095 "superblock": true, 00:30:38.095 "num_base_bdevs": 4, 00:30:38.095 "num_base_bdevs_discovered": 2, 00:30:38.095 "num_base_bdevs_operational": 4, 00:30:38.095 "base_bdevs_list": [ 00:30:38.095 { 00:30:38.095 "name": "BaseBdev1", 00:30:38.096 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:38.096 "is_configured": true, 00:30:38.096 "data_offset": 2048, 00:30:38.096 "data_size": 63488 00:30:38.096 }, 00:30:38.096 { 00:30:38.096 "name": "BaseBdev2", 00:30:38.096 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:38.096 "is_configured": true, 00:30:38.096 "data_offset": 2048, 00:30:38.096 "data_size": 63488 00:30:38.096 }, 00:30:38.096 { 00:30:38.096 "name": "BaseBdev3", 00:30:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.096 "is_configured": false, 00:30:38.096 "data_offset": 0, 00:30:38.096 "data_size": 0 00:30:38.096 }, 00:30:38.096 { 00:30:38.096 "name": "BaseBdev4", 00:30:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.096 "is_configured": false, 00:30:38.096 "data_offset": 0, 00:30:38.096 "data_size": 0 00:30:38.096 } 00:30:38.096 ] 00:30:38.096 }' 00:30:38.096 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.096 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.663 [2024-11-20 07:28:02.923811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:38.663 BaseBdev3 
00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.663 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.663 [ 00:30:38.663 { 00:30:38.663 "name": "BaseBdev3", 00:30:38.663 "aliases": [ 00:30:38.663 "ddbc8dae-ca68-497a-b7a5-69dedef1183e" 00:30:38.663 ], 00:30:38.663 "product_name": "Malloc disk", 00:30:38.663 "block_size": 512, 00:30:38.663 "num_blocks": 65536, 00:30:38.663 "uuid": "ddbc8dae-ca68-497a-b7a5-69dedef1183e", 00:30:38.663 
"assigned_rate_limits": { 00:30:38.663 "rw_ios_per_sec": 0, 00:30:38.663 "rw_mbytes_per_sec": 0, 00:30:38.663 "r_mbytes_per_sec": 0, 00:30:38.664 "w_mbytes_per_sec": 0 00:30:38.664 }, 00:30:38.664 "claimed": true, 00:30:38.664 "claim_type": "exclusive_write", 00:30:38.664 "zoned": false, 00:30:38.664 "supported_io_types": { 00:30:38.664 "read": true, 00:30:38.664 "write": true, 00:30:38.664 "unmap": true, 00:30:38.664 "flush": true, 00:30:38.664 "reset": true, 00:30:38.664 "nvme_admin": false, 00:30:38.922 "nvme_io": false, 00:30:38.922 "nvme_io_md": false, 00:30:38.922 "write_zeroes": true, 00:30:38.922 "zcopy": true, 00:30:38.922 "get_zone_info": false, 00:30:38.922 "zone_management": false, 00:30:38.922 "zone_append": false, 00:30:38.922 "compare": false, 00:30:38.923 "compare_and_write": false, 00:30:38.923 "abort": true, 00:30:38.923 "seek_hole": false, 00:30:38.923 "seek_data": false, 00:30:38.923 "copy": true, 00:30:38.923 "nvme_iov_md": false 00:30:38.923 }, 00:30:38.923 "memory_domains": [ 00:30:38.923 { 00:30:38.923 "dma_device_id": "system", 00:30:38.923 "dma_device_type": 1 00:30:38.923 }, 00:30:38.923 { 00:30:38.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.923 "dma_device_type": 2 00:30:38.923 } 00:30:38.923 ], 00:30:38.923 "driver_specific": {} 00:30:38.923 } 00:30:38.923 ] 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.923 07:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.923 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.923 "name": "Existed_Raid", 00:30:38.923 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:38.923 "strip_size_kb": 64, 00:30:38.923 "state": "configuring", 00:30:38.923 "raid_level": "raid5f", 00:30:38.923 "superblock": true, 00:30:38.923 "num_base_bdevs": 4, 00:30:38.923 "num_base_bdevs_discovered": 3, 
00:30:38.923 "num_base_bdevs_operational": 4, 00:30:38.923 "base_bdevs_list": [ 00:30:38.923 { 00:30:38.923 "name": "BaseBdev1", 00:30:38.923 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:38.923 "is_configured": true, 00:30:38.923 "data_offset": 2048, 00:30:38.923 "data_size": 63488 00:30:38.923 }, 00:30:38.923 { 00:30:38.923 "name": "BaseBdev2", 00:30:38.923 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:38.923 "is_configured": true, 00:30:38.923 "data_offset": 2048, 00:30:38.923 "data_size": 63488 00:30:38.923 }, 00:30:38.923 { 00:30:38.923 "name": "BaseBdev3", 00:30:38.923 "uuid": "ddbc8dae-ca68-497a-b7a5-69dedef1183e", 00:30:38.923 "is_configured": true, 00:30:38.923 "data_offset": 2048, 00:30:38.923 "data_size": 63488 00:30:38.923 }, 00:30:38.923 { 00:30:38.923 "name": "BaseBdev4", 00:30:38.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.923 "is_configured": false, 00:30:38.923 "data_offset": 0, 00:30:38.923 "data_size": 0 00:30:38.923 } 00:30:38.923 ] 00:30:38.923 }' 00:30:38.923 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.923 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:39.181 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.181 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.440 [2024-11-20 07:28:03.506947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:39.440 [2024-11-20 07:28:03.507310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:39.440 [2024-11-20 07:28:03.507360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:39.440 BaseBdev4 
00:30:39.440 [2024-11-20 07:28:03.507694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.440 [2024-11-20 07:28:03.514190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:39.440 [2024-11-20 07:28:03.514233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:39.440 [2024-11-20 07:28:03.514478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:39.440 07:28:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.440 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.440 [ 00:30:39.440 { 00:30:39.440 "name": "BaseBdev4", 00:30:39.440 "aliases": [ 00:30:39.440 "33e0ad55-4c75-4ad6-951f-596325c6c1e7" 00:30:39.440 ], 00:30:39.440 "product_name": "Malloc disk", 00:30:39.441 "block_size": 512, 00:30:39.441 "num_blocks": 65536, 00:30:39.441 "uuid": "33e0ad55-4c75-4ad6-951f-596325c6c1e7", 00:30:39.441 "assigned_rate_limits": { 00:30:39.441 "rw_ios_per_sec": 0, 00:30:39.441 "rw_mbytes_per_sec": 0, 00:30:39.441 "r_mbytes_per_sec": 0, 00:30:39.441 "w_mbytes_per_sec": 0 00:30:39.441 }, 00:30:39.441 "claimed": true, 00:30:39.441 "claim_type": "exclusive_write", 00:30:39.441 "zoned": false, 00:30:39.441 "supported_io_types": { 00:30:39.441 "read": true, 00:30:39.441 "write": true, 00:30:39.441 "unmap": true, 00:30:39.441 "flush": true, 00:30:39.441 "reset": true, 00:30:39.441 "nvme_admin": false, 00:30:39.441 "nvme_io": false, 00:30:39.441 "nvme_io_md": false, 00:30:39.441 "write_zeroes": true, 00:30:39.441 "zcopy": true, 00:30:39.441 "get_zone_info": false, 00:30:39.441 "zone_management": false, 00:30:39.441 "zone_append": false, 00:30:39.441 "compare": false, 00:30:39.441 "compare_and_write": false, 00:30:39.441 "abort": true, 00:30:39.441 "seek_hole": false, 00:30:39.441 "seek_data": false, 00:30:39.441 "copy": true, 00:30:39.441 "nvme_iov_md": false 00:30:39.441 }, 00:30:39.441 "memory_domains": [ 00:30:39.441 { 00:30:39.441 "dma_device_id": "system", 00:30:39.441 "dma_device_type": 1 00:30:39.441 }, 00:30:39.441 { 00:30:39.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.441 "dma_device_type": 2 00:30:39.441 } 00:30:39.441 ], 00:30:39.441 "driver_specific": {} 00:30:39.441 } 00:30:39.441 ] 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.441 07:28:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.441 "name": "Existed_Raid", 00:30:39.441 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:39.441 "strip_size_kb": 64, 00:30:39.441 "state": "online", 00:30:39.441 "raid_level": "raid5f", 00:30:39.441 "superblock": true, 00:30:39.441 "num_base_bdevs": 4, 00:30:39.441 "num_base_bdevs_discovered": 4, 00:30:39.441 "num_base_bdevs_operational": 4, 00:30:39.441 "base_bdevs_list": [ 00:30:39.441 { 00:30:39.441 "name": "BaseBdev1", 00:30:39.441 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:39.441 "is_configured": true, 00:30:39.441 "data_offset": 2048, 00:30:39.441 "data_size": 63488 00:30:39.441 }, 00:30:39.441 { 00:30:39.441 "name": "BaseBdev2", 00:30:39.441 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:39.441 "is_configured": true, 00:30:39.441 "data_offset": 2048, 00:30:39.441 "data_size": 63488 00:30:39.441 }, 00:30:39.441 { 00:30:39.441 "name": "BaseBdev3", 00:30:39.441 "uuid": "ddbc8dae-ca68-497a-b7a5-69dedef1183e", 00:30:39.441 "is_configured": true, 00:30:39.441 "data_offset": 2048, 00:30:39.441 "data_size": 63488 00:30:39.441 }, 00:30:39.441 { 00:30:39.441 "name": "BaseBdev4", 00:30:39.441 "uuid": "33e0ad55-4c75-4ad6-951f-596325c6c1e7", 00:30:39.441 "is_configured": true, 00:30:39.441 "data_offset": 2048, 00:30:39.441 "data_size": 63488 00:30:39.441 } 00:30:39.441 ] 00:30:39.441 }' 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.441 07:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.009 [2024-11-20 07:28:04.073764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:40.009 "name": "Existed_Raid", 00:30:40.009 "aliases": [ 00:30:40.009 "3d26e57e-245f-457a-bdcd-4330019e7788" 00:30:40.009 ], 00:30:40.009 "product_name": "Raid Volume", 00:30:40.009 "block_size": 512, 00:30:40.009 "num_blocks": 190464, 00:30:40.009 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:40.009 "assigned_rate_limits": { 00:30:40.009 "rw_ios_per_sec": 0, 00:30:40.009 "rw_mbytes_per_sec": 0, 00:30:40.009 "r_mbytes_per_sec": 0, 00:30:40.009 "w_mbytes_per_sec": 0 00:30:40.009 }, 00:30:40.009 "claimed": false, 00:30:40.009 "zoned": false, 00:30:40.009 "supported_io_types": { 00:30:40.009 "read": true, 00:30:40.009 "write": true, 00:30:40.009 "unmap": false, 00:30:40.009 "flush": false, 
00:30:40.009 "reset": true, 00:30:40.009 "nvme_admin": false, 00:30:40.009 "nvme_io": false, 00:30:40.009 "nvme_io_md": false, 00:30:40.009 "write_zeroes": true, 00:30:40.009 "zcopy": false, 00:30:40.009 "get_zone_info": false, 00:30:40.009 "zone_management": false, 00:30:40.009 "zone_append": false, 00:30:40.009 "compare": false, 00:30:40.009 "compare_and_write": false, 00:30:40.009 "abort": false, 00:30:40.009 "seek_hole": false, 00:30:40.009 "seek_data": false, 00:30:40.009 "copy": false, 00:30:40.009 "nvme_iov_md": false 00:30:40.009 }, 00:30:40.009 "driver_specific": { 00:30:40.009 "raid": { 00:30:40.009 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:40.009 "strip_size_kb": 64, 00:30:40.009 "state": "online", 00:30:40.009 "raid_level": "raid5f", 00:30:40.009 "superblock": true, 00:30:40.009 "num_base_bdevs": 4, 00:30:40.009 "num_base_bdevs_discovered": 4, 00:30:40.009 "num_base_bdevs_operational": 4, 00:30:40.009 "base_bdevs_list": [ 00:30:40.009 { 00:30:40.009 "name": "BaseBdev1", 00:30:40.009 "uuid": "ac1a0e99-0c2a-4387-935c-1d5dc41aff2f", 00:30:40.009 "is_configured": true, 00:30:40.009 "data_offset": 2048, 00:30:40.009 "data_size": 63488 00:30:40.009 }, 00:30:40.009 { 00:30:40.009 "name": "BaseBdev2", 00:30:40.009 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:40.009 "is_configured": true, 00:30:40.009 "data_offset": 2048, 00:30:40.009 "data_size": 63488 00:30:40.009 }, 00:30:40.009 { 00:30:40.009 "name": "BaseBdev3", 00:30:40.009 "uuid": "ddbc8dae-ca68-497a-b7a5-69dedef1183e", 00:30:40.009 "is_configured": true, 00:30:40.009 "data_offset": 2048, 00:30:40.009 "data_size": 63488 00:30:40.009 }, 00:30:40.009 { 00:30:40.009 "name": "BaseBdev4", 00:30:40.009 "uuid": "33e0ad55-4c75-4ad6-951f-596325c6c1e7", 00:30:40.009 "is_configured": true, 00:30:40.009 "data_offset": 2048, 00:30:40.009 "data_size": 63488 00:30:40.009 } 00:30:40.009 ] 00:30:40.009 } 00:30:40.009 } 00:30:40.009 }' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:40.009 BaseBdev2 00:30:40.009 BaseBdev3 00:30:40.009 BaseBdev4' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:40.009 07:28:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.009 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.268 [2024-11-20 07:28:04.445682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.268 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.269 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.527 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.527 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.527 "name": "Existed_Raid", 00:30:40.527 "uuid": "3d26e57e-245f-457a-bdcd-4330019e7788", 00:30:40.527 "strip_size_kb": 64, 00:30:40.527 "state": "online", 00:30:40.527 "raid_level": "raid5f", 00:30:40.527 "superblock": true, 00:30:40.527 "num_base_bdevs": 4, 00:30:40.527 "num_base_bdevs_discovered": 3, 00:30:40.527 "num_base_bdevs_operational": 3, 00:30:40.527 "base_bdevs_list": [ 00:30:40.527 { 00:30:40.527 "name": null, 00:30:40.527 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:40.527 "is_configured": false, 00:30:40.527 "data_offset": 0, 00:30:40.527 "data_size": 63488 00:30:40.527 }, 00:30:40.527 { 00:30:40.527 "name": "BaseBdev2", 00:30:40.527 "uuid": "7292cfbb-8525-4c61-a8d0-3d2d62ab70f9", 00:30:40.527 "is_configured": true, 00:30:40.527 "data_offset": 2048, 00:30:40.527 "data_size": 63488 00:30:40.527 }, 00:30:40.527 { 00:30:40.527 "name": "BaseBdev3", 00:30:40.527 "uuid": "ddbc8dae-ca68-497a-b7a5-69dedef1183e", 00:30:40.527 "is_configured": true, 00:30:40.527 "data_offset": 2048, 00:30:40.527 "data_size": 63488 00:30:40.527 }, 00:30:40.527 { 00:30:40.527 "name": "BaseBdev4", 00:30:40.527 "uuid": "33e0ad55-4c75-4ad6-951f-596325c6c1e7", 00:30:40.527 "is_configured": true, 00:30:40.527 "data_offset": 2048, 00:30:40.528 "data_size": 63488 00:30:40.528 } 00:30:40.528 ] 00:30:40.528 }' 00:30:40.528 07:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.528 07:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.786 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:40.787 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.045 [2024-11-20 07:28:05.132588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:41.045 [2024-11-20 07:28:05.132987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:41.045 [2024-11-20 07:28:05.227376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:41.045 
07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.045 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.045 [2024-11-20 07:28:05.287425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.304 [2024-11-20 07:28:05.442266] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:41.304 [2024-11-20 07:28:05.442335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:41.304 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.563 BaseBdev2 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.563 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 [ 00:30:41.564 { 00:30:41.564 "name": "BaseBdev2", 00:30:41.564 "aliases": [ 00:30:41.564 "91ca7174-412d-445f-b70d-defcbf647d67" 00:30:41.564 ], 00:30:41.564 "product_name": "Malloc disk", 00:30:41.564 "block_size": 512, 00:30:41.564 "num_blocks": 65536, 00:30:41.564 "uuid": 
"91ca7174-412d-445f-b70d-defcbf647d67", 00:30:41.564 "assigned_rate_limits": { 00:30:41.564 "rw_ios_per_sec": 0, 00:30:41.564 "rw_mbytes_per_sec": 0, 00:30:41.564 "r_mbytes_per_sec": 0, 00:30:41.564 "w_mbytes_per_sec": 0 00:30:41.564 }, 00:30:41.564 "claimed": false, 00:30:41.564 "zoned": false, 00:30:41.564 "supported_io_types": { 00:30:41.564 "read": true, 00:30:41.564 "write": true, 00:30:41.564 "unmap": true, 00:30:41.564 "flush": true, 00:30:41.564 "reset": true, 00:30:41.564 "nvme_admin": false, 00:30:41.564 "nvme_io": false, 00:30:41.564 "nvme_io_md": false, 00:30:41.564 "write_zeroes": true, 00:30:41.564 "zcopy": true, 00:30:41.564 "get_zone_info": false, 00:30:41.564 "zone_management": false, 00:30:41.564 "zone_append": false, 00:30:41.564 "compare": false, 00:30:41.564 "compare_and_write": false, 00:30:41.564 "abort": true, 00:30:41.564 "seek_hole": false, 00:30:41.564 "seek_data": false, 00:30:41.564 "copy": true, 00:30:41.564 "nvme_iov_md": false 00:30:41.564 }, 00:30:41.564 "memory_domains": [ 00:30:41.564 { 00:30:41.564 "dma_device_id": "system", 00:30:41.564 "dma_device_type": 1 00:30:41.564 }, 00:30:41.564 { 00:30:41.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.564 "dma_device_type": 2 00:30:41.564 } 00:30:41.564 ], 00:30:41.564 "driver_specific": {} 00:30:41.564 } 00:30:41.564 ] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 BaseBdev3 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 [ 00:30:41.564 { 00:30:41.564 "name": "BaseBdev3", 00:30:41.564 "aliases": [ 00:30:41.564 "b37697f2-b377-4ebc-9b18-4cdbcb111685" 00:30:41.564 ], 00:30:41.564 
"product_name": "Malloc disk", 00:30:41.564 "block_size": 512, 00:30:41.564 "num_blocks": 65536, 00:30:41.564 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:41.564 "assigned_rate_limits": { 00:30:41.564 "rw_ios_per_sec": 0, 00:30:41.564 "rw_mbytes_per_sec": 0, 00:30:41.564 "r_mbytes_per_sec": 0, 00:30:41.564 "w_mbytes_per_sec": 0 00:30:41.564 }, 00:30:41.564 "claimed": false, 00:30:41.564 "zoned": false, 00:30:41.564 "supported_io_types": { 00:30:41.564 "read": true, 00:30:41.564 "write": true, 00:30:41.564 "unmap": true, 00:30:41.564 "flush": true, 00:30:41.564 "reset": true, 00:30:41.564 "nvme_admin": false, 00:30:41.564 "nvme_io": false, 00:30:41.564 "nvme_io_md": false, 00:30:41.564 "write_zeroes": true, 00:30:41.564 "zcopy": true, 00:30:41.564 "get_zone_info": false, 00:30:41.564 "zone_management": false, 00:30:41.564 "zone_append": false, 00:30:41.564 "compare": false, 00:30:41.564 "compare_and_write": false, 00:30:41.564 "abort": true, 00:30:41.564 "seek_hole": false, 00:30:41.564 "seek_data": false, 00:30:41.564 "copy": true, 00:30:41.564 "nvme_iov_md": false 00:30:41.564 }, 00:30:41.564 "memory_domains": [ 00:30:41.564 { 00:30:41.564 "dma_device_id": "system", 00:30:41.564 "dma_device_type": 1 00:30:41.564 }, 00:30:41.564 { 00:30:41.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.564 "dma_device_type": 2 00:30:41.564 } 00:30:41.564 ], 00:30:41.564 "driver_specific": {} 00:30:41.564 } 00:30:41.564 ] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 BaseBdev4 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.564 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.564 [ 00:30:41.564 { 00:30:41.564 "name": "BaseBdev4", 00:30:41.564 
"aliases": [ 00:30:41.564 "343b7ae2-8851-4b69-955d-8d700b28c55d" 00:30:41.564 ], 00:30:41.564 "product_name": "Malloc disk", 00:30:41.564 "block_size": 512, 00:30:41.564 "num_blocks": 65536, 00:30:41.564 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:41.564 "assigned_rate_limits": { 00:30:41.564 "rw_ios_per_sec": 0, 00:30:41.564 "rw_mbytes_per_sec": 0, 00:30:41.564 "r_mbytes_per_sec": 0, 00:30:41.564 "w_mbytes_per_sec": 0 00:30:41.564 }, 00:30:41.564 "claimed": false, 00:30:41.564 "zoned": false, 00:30:41.564 "supported_io_types": { 00:30:41.564 "read": true, 00:30:41.564 "write": true, 00:30:41.564 "unmap": true, 00:30:41.564 "flush": true, 00:30:41.564 "reset": true, 00:30:41.564 "nvme_admin": false, 00:30:41.564 "nvme_io": false, 00:30:41.564 "nvme_io_md": false, 00:30:41.564 "write_zeroes": true, 00:30:41.564 "zcopy": true, 00:30:41.564 "get_zone_info": false, 00:30:41.564 "zone_management": false, 00:30:41.564 "zone_append": false, 00:30:41.564 "compare": false, 00:30:41.564 "compare_and_write": false, 00:30:41.564 "abort": true, 00:30:41.564 "seek_hole": false, 00:30:41.564 "seek_data": false, 00:30:41.564 "copy": true, 00:30:41.564 "nvme_iov_md": false 00:30:41.564 }, 00:30:41.564 "memory_domains": [ 00:30:41.564 { 00:30:41.564 "dma_device_id": "system", 00:30:41.564 "dma_device_type": 1 00:30:41.564 }, 00:30:41.564 { 00:30:41.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.564 "dma_device_type": 2 00:30:41.564 } 00:30:41.564 ], 00:30:41.564 "driver_specific": {} 00:30:41.564 } 00:30:41.564 ] 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:41.565 
07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.565 [2024-11-20 07:28:05.838165] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:41.565 [2024-11-20 07:28:05.838218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:41.565 [2024-11-20 07:28:05.838250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:41.565 [2024-11-20 07:28:05.840856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.565 [2024-11-20 07:28:05.840926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.565 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.824 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.824 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.824 "name": "Existed_Raid", 00:30:41.824 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:41.824 "strip_size_kb": 64, 00:30:41.824 "state": "configuring", 00:30:41.824 "raid_level": "raid5f", 00:30:41.824 "superblock": true, 00:30:41.824 "num_base_bdevs": 4, 00:30:41.824 "num_base_bdevs_discovered": 3, 00:30:41.824 "num_base_bdevs_operational": 4, 00:30:41.824 "base_bdevs_list": [ 00:30:41.824 { 00:30:41.824 "name": "BaseBdev1", 00:30:41.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.824 "is_configured": false, 00:30:41.824 "data_offset": 0, 00:30:41.824 "data_size": 0 00:30:41.824 }, 00:30:41.824 { 00:30:41.824 "name": "BaseBdev2", 00:30:41.824 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:41.824 "is_configured": true, 00:30:41.824 "data_offset": 2048, 00:30:41.824 "data_size": 63488 00:30:41.824 }, 00:30:41.824 { 00:30:41.824 "name": "BaseBdev3", 
00:30:41.824 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:41.824 "is_configured": true, 00:30:41.824 "data_offset": 2048, 00:30:41.824 "data_size": 63488 00:30:41.824 }, 00:30:41.824 { 00:30:41.824 "name": "BaseBdev4", 00:30:41.824 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:41.824 "is_configured": true, 00:30:41.824 "data_offset": 2048, 00:30:41.824 "data_size": 63488 00:30:41.824 } 00:30:41.824 ] 00:30:41.824 }' 00:30:41.824 07:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.824 07:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.082 [2024-11-20 07:28:06.358475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:42.082 
07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:42.082 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:42.341 "name": "Existed_Raid", 00:30:42.341 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:42.341 "strip_size_kb": 64, 00:30:42.341 "state": "configuring", 00:30:42.341 "raid_level": "raid5f", 00:30:42.341 "superblock": true, 00:30:42.341 "num_base_bdevs": 4, 00:30:42.341 "num_base_bdevs_discovered": 2, 00:30:42.341 "num_base_bdevs_operational": 4, 00:30:42.341 "base_bdevs_list": [ 00:30:42.341 { 00:30:42.341 "name": "BaseBdev1", 00:30:42.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.341 "is_configured": false, 00:30:42.341 "data_offset": 0, 00:30:42.341 "data_size": 0 00:30:42.341 }, 00:30:42.341 { 00:30:42.341 "name": null, 00:30:42.341 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:42.341 "is_configured": false, 00:30:42.341 "data_offset": 0, 00:30:42.341 "data_size": 63488 00:30:42.341 }, 00:30:42.341 { 
00:30:42.341 "name": "BaseBdev3", 00:30:42.341 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:42.341 "is_configured": true, 00:30:42.341 "data_offset": 2048, 00:30:42.341 "data_size": 63488 00:30:42.341 }, 00:30:42.341 { 00:30:42.341 "name": "BaseBdev4", 00:30:42.341 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:42.341 "is_configured": true, 00:30:42.341 "data_offset": 2048, 00:30:42.341 "data_size": 63488 00:30:42.341 } 00:30:42.341 ] 00:30:42.341 }' 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:42.341 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.909 07:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 [2024-11-20 07:28:07.013532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:42.909 BaseBdev1 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 [ 00:30:42.909 { 00:30:42.909 "name": "BaseBdev1", 00:30:42.909 "aliases": [ 00:30:42.909 "9a7ff74f-22ff-469d-be23-37539251b085" 00:30:42.909 ], 00:30:42.909 "product_name": "Malloc disk", 00:30:42.909 "block_size": 512, 00:30:42.909 "num_blocks": 65536, 00:30:42.909 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:42.909 "assigned_rate_limits": { 00:30:42.909 "rw_ios_per_sec": 0, 00:30:42.909 "rw_mbytes_per_sec": 0, 00:30:42.909 
"r_mbytes_per_sec": 0, 00:30:42.909 "w_mbytes_per_sec": 0 00:30:42.909 }, 00:30:42.909 "claimed": true, 00:30:42.909 "claim_type": "exclusive_write", 00:30:42.909 "zoned": false, 00:30:42.909 "supported_io_types": { 00:30:42.909 "read": true, 00:30:42.909 "write": true, 00:30:42.909 "unmap": true, 00:30:42.909 "flush": true, 00:30:42.909 "reset": true, 00:30:42.909 "nvme_admin": false, 00:30:42.909 "nvme_io": false, 00:30:42.909 "nvme_io_md": false, 00:30:42.909 "write_zeroes": true, 00:30:42.909 "zcopy": true, 00:30:42.909 "get_zone_info": false, 00:30:42.909 "zone_management": false, 00:30:42.909 "zone_append": false, 00:30:42.909 "compare": false, 00:30:42.909 "compare_and_write": false, 00:30:42.909 "abort": true, 00:30:42.909 "seek_hole": false, 00:30:42.909 "seek_data": false, 00:30:42.909 "copy": true, 00:30:42.909 "nvme_iov_md": false 00:30:42.909 }, 00:30:42.909 "memory_domains": [ 00:30:42.909 { 00:30:42.909 "dma_device_id": "system", 00:30:42.909 "dma_device_type": 1 00:30:42.909 }, 00:30:42.909 { 00:30:42.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.909 "dma_device_type": 2 00:30:42.909 } 00:30:42.909 ], 00:30:42.909 "driver_specific": {} 00:30:42.909 } 00:30:42.909 ] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:42.909 07:28:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:42.909 "name": "Existed_Raid", 00:30:42.909 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:42.909 "strip_size_kb": 64, 00:30:42.909 "state": "configuring", 00:30:42.909 "raid_level": "raid5f", 00:30:42.909 "superblock": true, 00:30:42.909 "num_base_bdevs": 4, 00:30:42.909 "num_base_bdevs_discovered": 3, 00:30:42.909 "num_base_bdevs_operational": 4, 00:30:42.909 "base_bdevs_list": [ 00:30:42.909 { 00:30:42.909 "name": "BaseBdev1", 00:30:42.909 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:42.909 "is_configured": true, 00:30:42.909 "data_offset": 2048, 00:30:42.909 "data_size": 63488 00:30:42.909 
}, 00:30:42.909 { 00:30:42.909 "name": null, 00:30:42.909 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:42.909 "is_configured": false, 00:30:42.909 "data_offset": 0, 00:30:42.909 "data_size": 63488 00:30:42.909 }, 00:30:42.909 { 00:30:42.909 "name": "BaseBdev3", 00:30:42.909 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:42.909 "is_configured": true, 00:30:42.909 "data_offset": 2048, 00:30:42.909 "data_size": 63488 00:30:42.909 }, 00:30:42.909 { 00:30:42.909 "name": "BaseBdev4", 00:30:42.909 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:42.909 "is_configured": true, 00:30:42.909 "data_offset": 2048, 00:30:42.909 "data_size": 63488 00:30:42.909 } 00:30:42.909 ] 00:30:42.909 }' 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:42.909 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.478 
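The `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` assertions repeated throughout this trace extract the `Existed_Raid` record from `bdev_raid_get_bdevs all` with jq and compare a handful of fields. A minimal Python approximation (the helper function and abridged record here are illustrative, not SPDK code; the field names come from the Existed_Raid JSON printed in this log) looks like:

```python
import json

# Abridged from the Existed_Raid JSON dumps in this trace.
RAID_INFO = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Check the same fields the shell helper reads via
    jq -r '.[] | select(.name == "Existed_Raid")'."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(RAID_INFO, "configuring", "raid5f", 64, 4))
```

The raid stays in the `configuring` state as long as `num_base_bdevs_discovered` is below `num_base_bdevs_operational`, which is what the test exercises by removing and re-adding base bdevs.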
[2024-11-20 07:28:07.633958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:30:43.478 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:43.478 "name": "Existed_Raid", 00:30:43.478 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:43.478 "strip_size_kb": 64, 00:30:43.478 "state": "configuring", 00:30:43.478 "raid_level": "raid5f", 00:30:43.478 "superblock": true, 00:30:43.478 "num_base_bdevs": 4, 00:30:43.478 "num_base_bdevs_discovered": 2, 00:30:43.478 "num_base_bdevs_operational": 4, 00:30:43.478 "base_bdevs_list": [ 00:30:43.478 { 00:30:43.478 "name": "BaseBdev1", 00:30:43.478 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:43.478 "is_configured": true, 00:30:43.478 "data_offset": 2048, 00:30:43.478 "data_size": 63488 00:30:43.478 }, 00:30:43.478 { 00:30:43.478 "name": null, 00:30:43.478 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:43.478 "is_configured": false, 00:30:43.478 "data_offset": 0, 00:30:43.478 "data_size": 63488 00:30:43.478 }, 00:30:43.479 { 00:30:43.479 "name": null, 00:30:43.479 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:43.479 "is_configured": false, 00:30:43.479 "data_offset": 0, 00:30:43.479 "data_size": 63488 00:30:43.479 }, 00:30:43.479 { 00:30:43.479 "name": "BaseBdev4", 00:30:43.479 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:43.479 "is_configured": true, 00:30:43.479 "data_offset": 2048, 00:30:43.479 "data_size": 63488 00:30:43.479 } 00:30:43.479 ] 00:30:43.479 }' 00:30:43.479 07:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:43.479 07:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.047 [2024-11-20 07:28:08.222093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.047 07:28:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.047 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.047 "name": "Existed_Raid", 00:30:44.047 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:44.047 "strip_size_kb": 64, 00:30:44.047 "state": "configuring", 00:30:44.048 "raid_level": "raid5f", 00:30:44.048 "superblock": true, 00:30:44.048 "num_base_bdevs": 4, 00:30:44.048 "num_base_bdevs_discovered": 3, 00:30:44.048 "num_base_bdevs_operational": 4, 00:30:44.048 "base_bdevs_list": [ 00:30:44.048 { 00:30:44.048 "name": "BaseBdev1", 00:30:44.048 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:44.048 "is_configured": true, 00:30:44.048 "data_offset": 2048, 00:30:44.048 "data_size": 63488 00:30:44.048 }, 00:30:44.048 { 00:30:44.048 "name": null, 00:30:44.048 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:44.048 "is_configured": false, 00:30:44.048 "data_offset": 0, 00:30:44.048 "data_size": 63488 00:30:44.048 }, 00:30:44.048 { 00:30:44.048 "name": "BaseBdev3", 00:30:44.048 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:44.048 "is_configured": true, 00:30:44.048 "data_offset": 2048, 00:30:44.048 "data_size": 63488 00:30:44.048 }, 00:30:44.048 { 
00:30:44.048 "name": "BaseBdev4", 00:30:44.048 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:44.048 "is_configured": true, 00:30:44.048 "data_offset": 2048, 00:30:44.048 "data_size": 63488 00:30:44.048 } 00:30:44.048 ] 00:30:44.048 }' 00:30:44.048 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.048 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.616 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.616 [2024-11-20 07:28:08.818433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.876 "name": "Existed_Raid", 00:30:44.876 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:44.876 "strip_size_kb": 64, 00:30:44.876 "state": "configuring", 00:30:44.876 "raid_level": "raid5f", 00:30:44.876 "superblock": true, 00:30:44.876 "num_base_bdevs": 4, 00:30:44.876 "num_base_bdevs_discovered": 2, 00:30:44.876 
"num_base_bdevs_operational": 4, 00:30:44.876 "base_bdevs_list": [ 00:30:44.876 { 00:30:44.876 "name": null, 00:30:44.876 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:44.876 "is_configured": false, 00:30:44.876 "data_offset": 0, 00:30:44.876 "data_size": 63488 00:30:44.876 }, 00:30:44.876 { 00:30:44.876 "name": null, 00:30:44.876 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:44.876 "is_configured": false, 00:30:44.876 "data_offset": 0, 00:30:44.876 "data_size": 63488 00:30:44.876 }, 00:30:44.876 { 00:30:44.876 "name": "BaseBdev3", 00:30:44.876 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:44.876 "is_configured": true, 00:30:44.876 "data_offset": 2048, 00:30:44.876 "data_size": 63488 00:30:44.876 }, 00:30:44.876 { 00:30:44.876 "name": "BaseBdev4", 00:30:44.876 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:44.876 "is_configured": true, 00:30:44.876 "data_offset": 2048, 00:30:44.876 "data_size": 63488 00:30:44.876 } 00:30:44.876 ] 00:30:44.876 }' 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.876 07:28:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.443 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.444 [2024-11-20 07:28:09.493003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.444 "name": "Existed_Raid", 00:30:45.444 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:45.444 "strip_size_kb": 64, 00:30:45.444 "state": "configuring", 00:30:45.444 "raid_level": "raid5f", 00:30:45.444 "superblock": true, 00:30:45.444 "num_base_bdevs": 4, 00:30:45.444 "num_base_bdevs_discovered": 3, 00:30:45.444 "num_base_bdevs_operational": 4, 00:30:45.444 "base_bdevs_list": [ 00:30:45.444 { 00:30:45.444 "name": null, 00:30:45.444 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:45.444 "is_configured": false, 00:30:45.444 "data_offset": 0, 00:30:45.444 "data_size": 63488 00:30:45.444 }, 00:30:45.444 { 00:30:45.444 "name": "BaseBdev2", 00:30:45.444 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:45.444 "is_configured": true, 00:30:45.444 "data_offset": 2048, 00:30:45.444 "data_size": 63488 00:30:45.444 }, 00:30:45.444 { 00:30:45.444 "name": "BaseBdev3", 00:30:45.444 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:45.444 "is_configured": true, 00:30:45.444 "data_offset": 2048, 00:30:45.444 "data_size": 63488 00:30:45.444 }, 00:30:45.444 { 00:30:45.444 "name": "BaseBdev4", 00:30:45.444 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:45.444 "is_configured": true, 00:30:45.444 "data_offset": 2048, 00:30:45.444 "data_size": 63488 00:30:45.444 } 00:30:45.444 ] 00:30:45.444 }' 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.444 07:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:30:46.011 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a7ff74f-22ff-469d-be23-37539251b085 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 [2024-11-20 07:28:10.179920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:46.012 [2024-11-20 07:28:10.180472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:46.012 [2024-11-20 
07:28:10.180511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:46.012 [2024-11-20 07:28:10.180885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:46.012 NewBaseBdev 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 [2024-11-20 07:28:10.188365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:46.012 [2024-11-20 07:28:10.188425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:46.012 [2024-11-20 07:28:10.188784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 [ 00:30:46.012 { 00:30:46.012 "name": "NewBaseBdev", 00:30:46.012 "aliases": [ 00:30:46.012 "9a7ff74f-22ff-469d-be23-37539251b085" 00:30:46.012 ], 00:30:46.012 "product_name": "Malloc disk", 00:30:46.012 "block_size": 512, 00:30:46.012 "num_blocks": 65536, 00:30:46.012 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:46.012 "assigned_rate_limits": { 00:30:46.012 "rw_ios_per_sec": 0, 00:30:46.012 "rw_mbytes_per_sec": 0, 00:30:46.012 "r_mbytes_per_sec": 0, 00:30:46.012 "w_mbytes_per_sec": 0 00:30:46.012 }, 00:30:46.012 "claimed": true, 00:30:46.012 "claim_type": "exclusive_write", 00:30:46.012 "zoned": false, 00:30:46.012 "supported_io_types": { 00:30:46.012 "read": true, 00:30:46.012 "write": true, 00:30:46.012 "unmap": true, 00:30:46.012 "flush": true, 00:30:46.012 "reset": true, 00:30:46.012 "nvme_admin": false, 00:30:46.012 "nvme_io": false, 00:30:46.012 "nvme_io_md": false, 00:30:46.012 "write_zeroes": true, 00:30:46.012 "zcopy": true, 00:30:46.012 "get_zone_info": false, 00:30:46.012 "zone_management": false, 00:30:46.012 "zone_append": false, 00:30:46.012 "compare": false, 00:30:46.012 "compare_and_write": false, 00:30:46.012 "abort": true, 00:30:46.012 "seek_hole": false, 00:30:46.012 "seek_data": false, 00:30:46.012 "copy": true, 00:30:46.012 "nvme_iov_md": false 00:30:46.012 }, 00:30:46.012 "memory_domains": [ 00:30:46.012 { 00:30:46.012 "dma_device_id": "system", 00:30:46.012 "dma_device_type": 1 00:30:46.012 }, 00:30:46.012 { 00:30:46.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:46.012 "dma_device_type": 2 00:30:46.012 } 00:30:46.012 ], 00:30:46.012 "driver_specific": {} 00:30:46.012 } 00:30:46.012 ] 00:30:46.012 07:28:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:46.012 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.012 "name": "Existed_Raid", 00:30:46.012 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:46.012 "strip_size_kb": 64, 00:30:46.012 "state": "online", 00:30:46.012 "raid_level": "raid5f", 00:30:46.012 "superblock": true, 00:30:46.012 "num_base_bdevs": 4, 00:30:46.012 "num_base_bdevs_discovered": 4, 00:30:46.012 "num_base_bdevs_operational": 4, 00:30:46.012 "base_bdevs_list": [ 00:30:46.012 { 00:30:46.012 "name": "NewBaseBdev", 00:30:46.012 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:46.012 "is_configured": true, 00:30:46.013 "data_offset": 2048, 00:30:46.013 "data_size": 63488 00:30:46.013 }, 00:30:46.013 { 00:30:46.013 "name": "BaseBdev2", 00:30:46.013 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:46.013 "is_configured": true, 00:30:46.013 "data_offset": 2048, 00:30:46.013 "data_size": 63488 00:30:46.013 }, 00:30:46.013 { 00:30:46.013 "name": "BaseBdev3", 00:30:46.013 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:46.013 "is_configured": true, 00:30:46.013 "data_offset": 2048, 00:30:46.013 "data_size": 63488 00:30:46.013 }, 00:30:46.013 { 00:30:46.013 "name": "BaseBdev4", 00:30:46.013 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:46.013 "is_configured": true, 00:30:46.013 "data_offset": 2048, 00:30:46.013 "data_size": 63488 00:30:46.013 } 00:30:46.013 ] 00:30:46.013 }' 00:30:46.013 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.013 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.594 [2024-11-20 07:28:10.766426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:46.594 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.595 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.595 "name": "Existed_Raid", 00:30:46.595 "aliases": [ 00:30:46.595 "9cacc201-d591-44a1-a0db-d45ec7c8e0f4" 00:30:46.595 ], 00:30:46.595 "product_name": "Raid Volume", 00:30:46.595 "block_size": 512, 00:30:46.595 "num_blocks": 190464, 00:30:46.595 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:46.595 "assigned_rate_limits": { 00:30:46.595 "rw_ios_per_sec": 0, 00:30:46.595 "rw_mbytes_per_sec": 0, 00:30:46.595 "r_mbytes_per_sec": 0, 00:30:46.595 "w_mbytes_per_sec": 0 00:30:46.595 }, 00:30:46.595 "claimed": false, 00:30:46.595 "zoned": false, 00:30:46.595 "supported_io_types": { 00:30:46.595 "read": true, 00:30:46.595 "write": true, 00:30:46.595 "unmap": false, 00:30:46.595 "flush": false, 00:30:46.595 "reset": true, 00:30:46.595 "nvme_admin": false, 00:30:46.595 "nvme_io": false, 
00:30:46.595 "nvme_io_md": false, 00:30:46.595 "write_zeroes": true, 00:30:46.595 "zcopy": false, 00:30:46.595 "get_zone_info": false, 00:30:46.595 "zone_management": false, 00:30:46.595 "zone_append": false, 00:30:46.595 "compare": false, 00:30:46.595 "compare_and_write": false, 00:30:46.595 "abort": false, 00:30:46.595 "seek_hole": false, 00:30:46.595 "seek_data": false, 00:30:46.595 "copy": false, 00:30:46.595 "nvme_iov_md": false 00:30:46.595 }, 00:30:46.595 "driver_specific": { 00:30:46.595 "raid": { 00:30:46.595 "uuid": "9cacc201-d591-44a1-a0db-d45ec7c8e0f4", 00:30:46.595 "strip_size_kb": 64, 00:30:46.595 "state": "online", 00:30:46.595 "raid_level": "raid5f", 00:30:46.595 "superblock": true, 00:30:46.595 "num_base_bdevs": 4, 00:30:46.595 "num_base_bdevs_discovered": 4, 00:30:46.595 "num_base_bdevs_operational": 4, 00:30:46.595 "base_bdevs_list": [ 00:30:46.595 { 00:30:46.595 "name": "NewBaseBdev", 00:30:46.595 "uuid": "9a7ff74f-22ff-469d-be23-37539251b085", 00:30:46.595 "is_configured": true, 00:30:46.595 "data_offset": 2048, 00:30:46.595 "data_size": 63488 00:30:46.595 }, 00:30:46.595 { 00:30:46.595 "name": "BaseBdev2", 00:30:46.595 "uuid": "91ca7174-412d-445f-b70d-defcbf647d67", 00:30:46.595 "is_configured": true, 00:30:46.595 "data_offset": 2048, 00:30:46.595 "data_size": 63488 00:30:46.595 }, 00:30:46.595 { 00:30:46.595 "name": "BaseBdev3", 00:30:46.595 "uuid": "b37697f2-b377-4ebc-9b18-4cdbcb111685", 00:30:46.595 "is_configured": true, 00:30:46.595 "data_offset": 2048, 00:30:46.595 "data_size": 63488 00:30:46.595 }, 00:30:46.595 { 00:30:46.595 "name": "BaseBdev4", 00:30:46.595 "uuid": "343b7ae2-8851-4b69-955d-8d700b28c55d", 00:30:46.595 "is_configured": true, 00:30:46.595 "data_offset": 2048, 00:30:46.595 "data_size": 63488 00:30:46.595 } 00:30:46.595 ] 00:30:46.595 } 00:30:46.595 } 00:30:46.595 }' 00:30:46.595 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:30:46.595 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:46.595 BaseBdev2 00:30:46.595 BaseBdev3 00:30:46.595 BaseBdev4' 00:30:46.595 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.855 07:28:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:46.855 07:28:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.855 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.855 [2024-11-20 07:28:11.142191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:46.855 [2024-11-20 07:28:11.142344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:46.855 [2024-11-20 07:28:11.142450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:46.855 [2024-11-20 07:28:11.142838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:46.855 [2024-11-20 07:28:11.142857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84023 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84023 ']' 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84023 00:30:47.115 07:28:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84023 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.115 killing process with pid 84023 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84023' 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84023 00:30:47.115 [2024-11-20 07:28:11.180656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:47.115 07:28:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84023 00:30:47.374 [2024-11-20 07:28:11.564672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:48.751 07:28:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:48.751 00:30:48.751 real 0m13.169s 00:30:48.751 user 0m21.825s 00:30:48.751 sys 0m1.789s 00:30:48.751 07:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.751 ************************************ 00:30:48.751 END TEST raid5f_state_function_test_sb 00:30:48.751 ************************************ 00:30:48.751 07:28:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.751 07:28:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:30:48.751 07:28:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.751 
07:28:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.751 07:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:48.751 ************************************ 00:30:48.751 START TEST raid5f_superblock_test 00:30:48.751 ************************************ 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84706 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84706 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84706 ']' 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.751 07:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.751 [2024-11-20 07:28:12.879188] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:30:48.751 [2024-11-20 07:28:12.879403] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84706 ] 00:30:49.010 [2024-11-20 07:28:13.074968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.010 [2024-11-20 07:28:13.218008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.269 [2024-11-20 07:28:13.451364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:49.269 [2024-11-20 07:28:13.451419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.837 malloc1 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.837 [2024-11-20 07:28:13.937576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:49.837 [2024-11-20 07:28:13.937658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.837 [2024-11-20 07:28:13.937691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:49.837 [2024-11-20 07:28:13.937706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.837 [2024-11-20 07:28:13.940861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.837 [2024-11-20 07:28:13.940903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:49.837 pt1 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.837 malloc2 00:30:49.837 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.838 07:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:49.838 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.838 07:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.838 [2024-11-20 07:28:13.998597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:49.838 [2024-11-20 07:28:13.998683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.838 [2024-11-20 07:28:13.998714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:49.838 [2024-11-20 07:28:13.998728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.838 [2024-11-20 07:28:14.001969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.838 [2024-11-20 07:28:14.002012] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:49.838 pt2 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.838 malloc3 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.838 [2024-11-20 07:28:14.066879] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:49.838 [2024-11-20 07:28:14.066954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:49.838 [2024-11-20 07:28:14.066985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:49.838 [2024-11-20 07:28:14.067020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:49.838 [2024-11-20 07:28:14.070054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:49.838 [2024-11-20 07:28:14.070117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:49.838 pt3 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.838 07:28:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.838 malloc4 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.838 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.097 [2024-11-20 07:28:14.130163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:50.097 [2024-11-20 07:28:14.130233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:50.097 [2024-11-20 07:28:14.130263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:50.097 [2024-11-20 07:28:14.130294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:50.097 [2024-11-20 07:28:14.133449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:50.097 [2024-11-20 07:28:14.133490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:50.097 pt4 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:50.097 [2024-11-20 07:28:14.142289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:50.097 [2024-11-20 07:28:14.145281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:50.097 [2024-11-20 07:28:14.145376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:50.097 [2024-11-20 07:28:14.145480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:50.097 [2024-11-20 07:28:14.145831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:50.097 [2024-11-20 07:28:14.145865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:50.097 [2024-11-20 07:28:14.146172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:50.097 [2024-11-20 07:28:14.153701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:50.097 [2024-11-20 07:28:14.153763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:50.097 [2024-11-20 07:28:14.154016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:50.097 
07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.097 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.097 "name": "raid_bdev1", 00:30:50.097 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:50.097 "strip_size_kb": 64, 00:30:50.097 "state": "online", 00:30:50.097 "raid_level": "raid5f", 00:30:50.097 "superblock": true, 00:30:50.097 "num_base_bdevs": 4, 00:30:50.097 "num_base_bdevs_discovered": 4, 00:30:50.097 "num_base_bdevs_operational": 4, 00:30:50.097 "base_bdevs_list": [ 00:30:50.097 { 00:30:50.097 "name": "pt1", 00:30:50.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:50.097 "is_configured": true, 00:30:50.097 "data_offset": 2048, 00:30:50.097 "data_size": 63488 00:30:50.097 }, 00:30:50.097 { 00:30:50.097 "name": "pt2", 00:30:50.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:50.097 "is_configured": true, 00:30:50.098 "data_offset": 2048, 00:30:50.098 
"data_size": 63488 00:30:50.098 }, 00:30:50.098 { 00:30:50.098 "name": "pt3", 00:30:50.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:50.098 "is_configured": true, 00:30:50.098 "data_offset": 2048, 00:30:50.098 "data_size": 63488 00:30:50.098 }, 00:30:50.098 { 00:30:50.098 "name": "pt4", 00:30:50.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:50.098 "is_configured": true, 00:30:50.098 "data_offset": 2048, 00:30:50.098 "data_size": 63488 00:30:50.098 } 00:30:50.098 ] 00:30:50.098 }' 00:30:50.098 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.098 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 [2024-11-20 07:28:14.705964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:50.665 "name": "raid_bdev1", 00:30:50.665 "aliases": [ 00:30:50.665 "b6412add-f3e4-42dd-94a9-ad118bc27941" 00:30:50.665 ], 00:30:50.665 "product_name": "Raid Volume", 00:30:50.665 "block_size": 512, 00:30:50.665 "num_blocks": 190464, 00:30:50.665 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:50.665 "assigned_rate_limits": { 00:30:50.665 "rw_ios_per_sec": 0, 00:30:50.665 "rw_mbytes_per_sec": 0, 00:30:50.665 "r_mbytes_per_sec": 0, 00:30:50.665 "w_mbytes_per_sec": 0 00:30:50.665 }, 00:30:50.665 "claimed": false, 00:30:50.665 "zoned": false, 00:30:50.665 "supported_io_types": { 00:30:50.665 "read": true, 00:30:50.665 "write": true, 00:30:50.665 "unmap": false, 00:30:50.665 "flush": false, 00:30:50.665 "reset": true, 00:30:50.665 "nvme_admin": false, 00:30:50.665 "nvme_io": false, 00:30:50.665 "nvme_io_md": false, 00:30:50.665 "write_zeroes": true, 00:30:50.665 "zcopy": false, 00:30:50.665 "get_zone_info": false, 00:30:50.665 "zone_management": false, 00:30:50.665 "zone_append": false, 00:30:50.665 "compare": false, 00:30:50.665 "compare_and_write": false, 00:30:50.665 "abort": false, 00:30:50.665 "seek_hole": false, 00:30:50.665 "seek_data": false, 00:30:50.665 "copy": false, 00:30:50.665 "nvme_iov_md": false 00:30:50.665 }, 00:30:50.665 "driver_specific": { 00:30:50.665 "raid": { 00:30:50.665 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:50.665 "strip_size_kb": 64, 00:30:50.665 "state": "online", 00:30:50.665 "raid_level": "raid5f", 00:30:50.665 "superblock": true, 00:30:50.665 "num_base_bdevs": 4, 00:30:50.665 "num_base_bdevs_discovered": 4, 00:30:50.665 "num_base_bdevs_operational": 4, 00:30:50.665 "base_bdevs_list": [ 00:30:50.665 { 00:30:50.665 "name": "pt1", 00:30:50.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:50.665 "is_configured": true, 00:30:50.665 "data_offset": 2048, 
00:30:50.665 "data_size": 63488 00:30:50.665 }, 00:30:50.665 { 00:30:50.665 "name": "pt2", 00:30:50.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:50.665 "is_configured": true, 00:30:50.665 "data_offset": 2048, 00:30:50.665 "data_size": 63488 00:30:50.665 }, 00:30:50.665 { 00:30:50.665 "name": "pt3", 00:30:50.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:50.665 "is_configured": true, 00:30:50.665 "data_offset": 2048, 00:30:50.665 "data_size": 63488 00:30:50.665 }, 00:30:50.665 { 00:30:50.665 "name": "pt4", 00:30:50.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:50.665 "is_configured": true, 00:30:50.665 "data_offset": 2048, 00:30:50.665 "data_size": 63488 00:30:50.665 } 00:30:50.665 ] 00:30:50.665 } 00:30:50.665 } 00:30:50.665 }' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:50.665 pt2 00:30:50.665 pt3 00:30:50.665 pt4' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 07:28:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.665 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.924 07:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.924 [2024-11-20 07:28:15.085991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b6412add-f3e4-42dd-94a9-ad118bc27941 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b6412add-f3e4-42dd-94a9-ad118bc27941 ']' 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.924 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.924 [2024-11-20 07:28:15.133808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:50.924 [2024-11-20 07:28:15.133840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:50.924 [2024-11-20 07:28:15.133934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:50.924 [2024-11-20 07:28:15.134043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:50.924 [2024-11-20 07:28:15.134066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:50.925 
07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.925 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 07:28:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 [2024-11-20 07:28:15.297863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:51.184 [2024-11-20 07:28:15.300434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:51.184 [2024-11-20 07:28:15.300514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:51.184 [2024-11-20 07:28:15.300563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:30:51.184 [2024-11-20 07:28:15.300680] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:51.184 [2024-11-20 07:28:15.300757] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:51.184 [2024-11-20 07:28:15.300790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:51.184 [2024-11-20 07:28:15.300820] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:30:51.184 [2024-11-20 07:28:15.300841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:51.184 [2024-11-20 07:28:15.300856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:30:51.184 request: 00:30:51.184 { 00:30:51.184 "name": "raid_bdev1", 00:30:51.184 "raid_level": "raid5f", 00:30:51.184 "base_bdevs": [ 00:30:51.184 "malloc1", 00:30:51.184 "malloc2", 00:30:51.184 "malloc3", 00:30:51.184 "malloc4" 00:30:51.184 ], 00:30:51.184 "strip_size_kb": 64, 00:30:51.184 "superblock": false, 00:30:51.184 "method": "bdev_raid_create", 00:30:51.184 "req_id": 1 00:30:51.184 } 00:30:51.184 Got JSON-RPC error response 
00:30:51.184 response: 00:30:51.184 { 00:30:51.184 "code": -17, 00:30:51.184 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:51.184 } 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.184 [2024-11-20 07:28:15.365860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:51.184 [2024-11-20 07:28:15.365914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:30:51.184 [2024-11-20 07:28:15.365953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:51.184 [2024-11-20 07:28:15.365999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:51.184 [2024-11-20 07:28:15.368884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:51.184 [2024-11-20 07:28:15.368930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:51.184 [2024-11-20 07:28:15.369052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:51.184 [2024-11-20 07:28:15.369125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:51.184 pt1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:51.184 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.185 "name": "raid_bdev1", 00:30:51.185 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:51.185 "strip_size_kb": 64, 00:30:51.185 "state": "configuring", 00:30:51.185 "raid_level": "raid5f", 00:30:51.185 "superblock": true, 00:30:51.185 "num_base_bdevs": 4, 00:30:51.185 "num_base_bdevs_discovered": 1, 00:30:51.185 "num_base_bdevs_operational": 4, 00:30:51.185 "base_bdevs_list": [ 00:30:51.185 { 00:30:51.185 "name": "pt1", 00:30:51.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:51.185 "is_configured": true, 00:30:51.185 "data_offset": 2048, 00:30:51.185 "data_size": 63488 00:30:51.185 }, 00:30:51.185 { 00:30:51.185 "name": null, 00:30:51.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:51.185 "is_configured": false, 00:30:51.185 "data_offset": 2048, 00:30:51.185 "data_size": 63488 00:30:51.185 }, 00:30:51.185 { 00:30:51.185 "name": null, 00:30:51.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:51.185 "is_configured": false, 00:30:51.185 "data_offset": 2048, 00:30:51.185 "data_size": 63488 00:30:51.185 }, 00:30:51.185 { 00:30:51.185 "name": null, 00:30:51.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:51.185 "is_configured": false, 00:30:51.185 "data_offset": 2048, 00:30:51.185 "data_size": 63488 00:30:51.185 } 00:30:51.185 ] 00:30:51.185 }' 
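The earlier `NOT rpc_cmd bdev_raid_create ...` step (the one that ends in the JSON-RPC `-17` / "File exists" error and `es=1`) relies on an exit-status-inverting helper from `autotest_common.sh`: the test step passes only when the wrapped command fails. A minimal, simplified sketch of that idiom — not the exact upstream implementation — using a hypothetical `create_once` stand-in for the duplicate-create scenario:

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT helper idiom: invert a command's exit
# status so an expected failure counts as success.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test wanted
}

# Hypothetical stand-in for "create the same resource twice": the second
# attempt fails with status 17, mirroring the -17 "File exists" error
# the duplicate bdev_raid_create produced in the log.
create_once() {
    if [ -e "$1" ]; then
        echo "File exists" >&2
        return 17
    fi
    : > "$1"
}

tmpfile="$(mktemp)"
rm -f "$tmpfile"

create_once "$tmpfile"       # first create succeeds
NOT create_once "$tmpfile"   # duplicate create fails, so NOT succeeds
echo "duplicate create rejected as expected"
rm -f "$tmpfile"
```

This is why the xtrace shows `es=1` followed by the `(( !es == 0 ))` check rather than the usual `[[ 0 == 0 ]]` success path: the framework records the failing status and then asserts that it was nonzero.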
00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.185 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.752 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.753 [2024-11-20 07:28:15.890114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:51.753 [2024-11-20 07:28:15.890195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:51.753 [2024-11-20 07:28:15.890223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:51.753 [2024-11-20 07:28:15.890255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:51.753 [2024-11-20 07:28:15.890877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:51.753 [2024-11-20 07:28:15.890908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:51.753 [2024-11-20 07:28:15.891047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:51.753 [2024-11-20 07:28:15.891094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:51.753 pt2 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.753 [2024-11-20 07:28:15.902047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.753 "name": "raid_bdev1", 00:30:51.753 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:51.753 "strip_size_kb": 64, 00:30:51.753 "state": "configuring", 00:30:51.753 "raid_level": "raid5f", 00:30:51.753 "superblock": true, 00:30:51.753 "num_base_bdevs": 4, 00:30:51.753 "num_base_bdevs_discovered": 1, 00:30:51.753 "num_base_bdevs_operational": 4, 00:30:51.753 "base_bdevs_list": [ 00:30:51.753 { 00:30:51.753 "name": "pt1", 00:30:51.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:51.753 "is_configured": true, 00:30:51.753 "data_offset": 2048, 00:30:51.753 "data_size": 63488 00:30:51.753 }, 00:30:51.753 { 00:30:51.753 "name": null, 00:30:51.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:51.753 "is_configured": false, 00:30:51.753 "data_offset": 0, 00:30:51.753 "data_size": 63488 00:30:51.753 }, 00:30:51.753 { 00:30:51.753 "name": null, 00:30:51.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:51.753 "is_configured": false, 00:30:51.753 "data_offset": 2048, 00:30:51.753 "data_size": 63488 00:30:51.753 }, 00:30:51.753 { 00:30:51.753 "name": null, 00:30:51.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:51.753 "is_configured": false, 00:30:51.753 "data_offset": 2048, 00:30:51.753 "data_size": 63488 00:30:51.753 } 00:30:51.753 ] 00:30:51.753 }' 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.753 07:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.323 [2024-11-20 07:28:16.454264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:52.323 [2024-11-20 07:28:16.454348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.323 [2024-11-20 07:28:16.454377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:52.323 [2024-11-20 07:28:16.454391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.323 [2024-11-20 07:28:16.454990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.323 [2024-11-20 07:28:16.455075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:52.323 [2024-11-20 07:28:16.455178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:52.323 [2024-11-20 07:28:16.455209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:52.323 pt2 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.323 [2024-11-20 07:28:16.466269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:30:52.323 [2024-11-20 07:28:16.466347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.323 [2024-11-20 07:28:16.466374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:52.323 [2024-11-20 07:28:16.466387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.323 [2024-11-20 07:28:16.466987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.323 [2024-11-20 07:28:16.467063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:52.323 [2024-11-20 07:28:16.467163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:52.323 [2024-11-20 07:28:16.467194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:52.323 pt3 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.323 [2024-11-20 07:28:16.474202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:52.323 [2024-11-20 07:28:16.474251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.323 [2024-11-20 07:28:16.474278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:30:52.323 [2024-11-20 07:28:16.474291] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.323 [2024-11-20 07:28:16.474800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.323 [2024-11-20 07:28:16.474847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:52.323 [2024-11-20 07:28:16.474927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:52.323 [2024-11-20 07:28:16.474968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:52.323 [2024-11-20 07:28:16.475172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:52.323 [2024-11-20 07:28:16.475194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:52.323 [2024-11-20 07:28:16.475501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:52.323 [2024-11-20 07:28:16.481332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:52.323 [2024-11-20 07:28:16.481381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:52.323 [2024-11-20 07:28:16.481623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.323 pt4 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.323 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.323 "name": "raid_bdev1", 00:30:52.323 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:52.323 "strip_size_kb": 64, 00:30:52.323 "state": "online", 00:30:52.323 "raid_level": "raid5f", 00:30:52.323 "superblock": true, 00:30:52.323 "num_base_bdevs": 4, 00:30:52.324 "num_base_bdevs_discovered": 4, 00:30:52.324 "num_base_bdevs_operational": 4, 00:30:52.324 "base_bdevs_list": [ 00:30:52.324 { 00:30:52.324 "name": "pt1", 00:30:52.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:52.324 "is_configured": true, 00:30:52.324 
"data_offset": 2048, 00:30:52.324 "data_size": 63488 00:30:52.324 }, 00:30:52.324 { 00:30:52.324 "name": "pt2", 00:30:52.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:52.324 "is_configured": true, 00:30:52.324 "data_offset": 2048, 00:30:52.324 "data_size": 63488 00:30:52.324 }, 00:30:52.324 { 00:30:52.324 "name": "pt3", 00:30:52.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:52.324 "is_configured": true, 00:30:52.324 "data_offset": 2048, 00:30:52.324 "data_size": 63488 00:30:52.324 }, 00:30:52.324 { 00:30:52.324 "name": "pt4", 00:30:52.324 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:52.324 "is_configured": true, 00:30:52.324 "data_offset": 2048, 00:30:52.324 "data_size": 63488 00:30:52.324 } 00:30:52.324 ] 00:30:52.324 }' 00:30:52.324 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.324 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.893 07:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:52.893 07:28:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.893 [2024-11-20 07:28:16.997549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.893 "name": "raid_bdev1", 00:30:52.893 "aliases": [ 00:30:52.893 "b6412add-f3e4-42dd-94a9-ad118bc27941" 00:30:52.893 ], 00:30:52.893 "product_name": "Raid Volume", 00:30:52.893 "block_size": 512, 00:30:52.893 "num_blocks": 190464, 00:30:52.893 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:52.893 "assigned_rate_limits": { 00:30:52.893 "rw_ios_per_sec": 0, 00:30:52.893 "rw_mbytes_per_sec": 0, 00:30:52.893 "r_mbytes_per_sec": 0, 00:30:52.893 "w_mbytes_per_sec": 0 00:30:52.893 }, 00:30:52.893 "claimed": false, 00:30:52.893 "zoned": false, 00:30:52.893 "supported_io_types": { 00:30:52.893 "read": true, 00:30:52.893 "write": true, 00:30:52.893 "unmap": false, 00:30:52.893 "flush": false, 00:30:52.893 "reset": true, 00:30:52.893 "nvme_admin": false, 00:30:52.893 "nvme_io": false, 00:30:52.893 "nvme_io_md": false, 00:30:52.893 "write_zeroes": true, 00:30:52.893 "zcopy": false, 00:30:52.893 "get_zone_info": false, 00:30:52.893 "zone_management": false, 00:30:52.893 "zone_append": false, 00:30:52.893 "compare": false, 00:30:52.893 "compare_and_write": false, 00:30:52.893 "abort": false, 00:30:52.893 "seek_hole": false, 00:30:52.893 "seek_data": false, 00:30:52.893 "copy": false, 00:30:52.893 "nvme_iov_md": false 00:30:52.893 }, 00:30:52.893 "driver_specific": { 00:30:52.893 "raid": { 00:30:52.893 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:52.893 "strip_size_kb": 64, 00:30:52.893 "state": "online", 00:30:52.893 "raid_level": "raid5f", 00:30:52.893 "superblock": true, 00:30:52.893 "num_base_bdevs": 4, 00:30:52.893 "num_base_bdevs_discovered": 4, 
00:30:52.893 "num_base_bdevs_operational": 4, 00:30:52.893 "base_bdevs_list": [ 00:30:52.893 { 00:30:52.893 "name": "pt1", 00:30:52.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:52.893 "is_configured": true, 00:30:52.893 "data_offset": 2048, 00:30:52.893 "data_size": 63488 00:30:52.893 }, 00:30:52.893 { 00:30:52.893 "name": "pt2", 00:30:52.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:52.893 "is_configured": true, 00:30:52.893 "data_offset": 2048, 00:30:52.893 "data_size": 63488 00:30:52.893 }, 00:30:52.893 { 00:30:52.893 "name": "pt3", 00:30:52.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:52.893 "is_configured": true, 00:30:52.893 "data_offset": 2048, 00:30:52.893 "data_size": 63488 00:30:52.893 }, 00:30:52.893 { 00:30:52.893 "name": "pt4", 00:30:52.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:52.893 "is_configured": true, 00:30:52.893 "data_offset": 2048, 00:30:52.893 "data_size": 63488 00:30:52.893 } 00:30:52.893 ] 00:30:52.893 } 00:30:52.893 } 00:30:52.893 }' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:52.893 pt2 00:30:52.893 pt3 00:30:52.893 pt4' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.893 07:28:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.893 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 
07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:53.154 [2024-11-20 07:28:17.381718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b6412add-f3e4-42dd-94a9-ad118bc27941 '!=' b6412add-f3e4-42dd-94a9-ad118bc27941 ']' 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 [2024-11-20 07:28:17.429491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.154 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.413 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.413 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.413 "name": "raid_bdev1", 00:30:53.413 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:53.413 "strip_size_kb": 64, 00:30:53.413 "state": "online", 00:30:53.413 "raid_level": "raid5f", 00:30:53.413 "superblock": true, 00:30:53.413 "num_base_bdevs": 4, 00:30:53.413 "num_base_bdevs_discovered": 3, 00:30:53.413 "num_base_bdevs_operational": 3, 00:30:53.413 "base_bdevs_list": [ 00:30:53.413 { 00:30:53.413 "name": null, 00:30:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.413 "is_configured": false, 00:30:53.413 "data_offset": 0, 00:30:53.413 "data_size": 63488 00:30:53.413 }, 00:30:53.413 { 00:30:53.413 "name": "pt2", 00:30:53.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:53.413 "is_configured": true, 00:30:53.413 "data_offset": 2048, 00:30:53.413 "data_size": 63488 00:30:53.413 }, 00:30:53.413 { 00:30:53.413 "name": "pt3", 00:30:53.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:53.413 "is_configured": true, 00:30:53.413 "data_offset": 2048, 00:30:53.413 "data_size": 63488 00:30:53.413 }, 00:30:53.413 { 00:30:53.413 "name": "pt4", 00:30:53.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:53.413 "is_configured": true, 00:30:53.413 
"data_offset": 2048, 00:30:53.413 "data_size": 63488 00:30:53.413 } 00:30:53.413 ] 00:30:53.413 }' 00:30:53.413 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.413 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.672 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:53.672 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.672 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.672 [2024-11-20 07:28:17.957676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:53.672 [2024-11-20 07:28:17.957758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:53.672 [2024-11-20 07:28:17.957851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:53.672 [2024-11-20 07:28:17.957998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:53.672 [2024-11-20 07:28:17.958020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 07:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 [2024-11-20 07:28:18.049657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:53.931 [2024-11-20 07:28:18.049724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.931 [2024-11-20 07:28:18.049769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:53.931 [2024-11-20 07:28:18.049784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.931 [2024-11-20 07:28:18.052926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.931 [2024-11-20 07:28:18.052967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:53.931 [2024-11-20 07:28:18.053078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:53.931 [2024-11-20 07:28:18.053144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:53.931 pt2 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.931 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.931 "name": "raid_bdev1", 00:30:53.931 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:53.931 "strip_size_kb": 64, 00:30:53.931 "state": "configuring", 00:30:53.931 "raid_level": "raid5f", 00:30:53.931 "superblock": true, 00:30:53.931 
"num_base_bdevs": 4, 00:30:53.931 "num_base_bdevs_discovered": 1, 00:30:53.931 "num_base_bdevs_operational": 3, 00:30:53.931 "base_bdevs_list": [ 00:30:53.931 { 00:30:53.931 "name": null, 00:30:53.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.931 "is_configured": false, 00:30:53.931 "data_offset": 2048, 00:30:53.931 "data_size": 63488 00:30:53.932 }, 00:30:53.932 { 00:30:53.932 "name": "pt2", 00:30:53.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:53.932 "is_configured": true, 00:30:53.932 "data_offset": 2048, 00:30:53.932 "data_size": 63488 00:30:53.932 }, 00:30:53.932 { 00:30:53.932 "name": null, 00:30:53.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:53.932 "is_configured": false, 00:30:53.932 "data_offset": 2048, 00:30:53.932 "data_size": 63488 00:30:53.932 }, 00:30:53.932 { 00:30:53.932 "name": null, 00:30:53.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:53.932 "is_configured": false, 00:30:53.932 "data_offset": 2048, 00:30:53.932 "data_size": 63488 00:30:53.932 } 00:30:53.932 ] 00:30:53.932 }' 00:30:53.932 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.932 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.516 [2024-11-20 07:28:18.589955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:54.516 [2024-11-20 
07:28:18.590040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.516 [2024-11-20 07:28:18.590073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:54.516 [2024-11-20 07:28:18.590088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.516 [2024-11-20 07:28:18.590697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.516 [2024-11-20 07:28:18.590722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:54.516 [2024-11-20 07:28:18.590831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:54.516 [2024-11-20 07:28:18.590868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:54.516 pt3 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.516 "name": "raid_bdev1", 00:30:54.516 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:54.516 "strip_size_kb": 64, 00:30:54.516 "state": "configuring", 00:30:54.516 "raid_level": "raid5f", 00:30:54.516 "superblock": true, 00:30:54.516 "num_base_bdevs": 4, 00:30:54.516 "num_base_bdevs_discovered": 2, 00:30:54.516 "num_base_bdevs_operational": 3, 00:30:54.516 "base_bdevs_list": [ 00:30:54.516 { 00:30:54.516 "name": null, 00:30:54.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.516 "is_configured": false, 00:30:54.516 "data_offset": 2048, 00:30:54.516 "data_size": 63488 00:30:54.516 }, 00:30:54.516 { 00:30:54.516 "name": "pt2", 00:30:54.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:54.516 "is_configured": true, 00:30:54.516 "data_offset": 2048, 00:30:54.516 "data_size": 63488 00:30:54.516 }, 00:30:54.516 { 00:30:54.516 "name": "pt3", 00:30:54.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:54.516 "is_configured": true, 00:30:54.516 "data_offset": 2048, 00:30:54.516 "data_size": 63488 00:30:54.516 }, 00:30:54.516 { 00:30:54.516 "name": null, 00:30:54.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:54.516 "is_configured": false, 00:30:54.516 "data_offset": 2048, 
00:30:54.516 "data_size": 63488 00:30:54.516 } 00:30:54.516 ] 00:30:54.516 }' 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.516 07:28:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.082 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.082 [2024-11-20 07:28:19.154239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:55.082 [2024-11-20 07:28:19.154337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.082 [2024-11-20 07:28:19.154369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:55.082 [2024-11-20 07:28:19.154383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.082 [2024-11-20 07:28:19.155075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.082 [2024-11-20 07:28:19.155102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:55.083 [2024-11-20 07:28:19.155210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:55.083 [2024-11-20 07:28:19.155241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:55.083 [2024-11-20 07:28:19.155411] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:55.083 [2024-11-20 07:28:19.155426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:55.083 [2024-11-20 07:28:19.155798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:55.083 [2024-11-20 07:28:19.162915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:55.083 [2024-11-20 07:28:19.162958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:55.083 [2024-11-20 07:28:19.163345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.083 pt4 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.083 
07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.083 "name": "raid_bdev1", 00:30:55.083 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:55.083 "strip_size_kb": 64, 00:30:55.083 "state": "online", 00:30:55.083 "raid_level": "raid5f", 00:30:55.083 "superblock": true, 00:30:55.083 "num_base_bdevs": 4, 00:30:55.083 "num_base_bdevs_discovered": 3, 00:30:55.083 "num_base_bdevs_operational": 3, 00:30:55.083 "base_bdevs_list": [ 00:30:55.083 { 00:30:55.083 "name": null, 00:30:55.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.083 "is_configured": false, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 00:30:55.083 { 00:30:55.083 "name": "pt2", 00:30:55.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 00:30:55.083 { 00:30:55.083 "name": "pt3", 00:30:55.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 }, 00:30:55.083 { 00:30:55.083 "name": "pt4", 00:30:55.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:55.083 "is_configured": true, 00:30:55.083 "data_offset": 2048, 00:30:55.083 "data_size": 63488 00:30:55.083 } 00:30:55.083 ] 00:30:55.083 }' 00:30:55.083 07:28:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.083 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 [2024-11-20 07:28:19.687837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:55.651 [2024-11-20 07:28:19.687871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:55.651 [2024-11-20 07:28:19.687966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:55.651 [2024-11-20 07:28:19.688073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:55.651 [2024-11-20 07:28:19.688094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 [2024-11-20 07:28:19.759787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:55.651 [2024-11-20 07:28:19.759873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.651 [2024-11-20 07:28:19.759909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:30:55.651 [2024-11-20 07:28:19.759927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.651 [2024-11-20 07:28:19.762964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.651 [2024-11-20 07:28:19.763029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:55.651 [2024-11-20 07:28:19.763132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:55.651 [2024-11-20 07:28:19.763201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:55.651 
[2024-11-20 07:28:19.763371] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:55.651 [2024-11-20 07:28:19.763393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:55.651 [2024-11-20 07:28:19.763414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:30:55.651 [2024-11-20 07:28:19.763485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:55.651 [2024-11-20 07:28:19.763642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:55.651 pt1 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.651 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.651 "name": "raid_bdev1", 00:30:55.651 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:55.651 "strip_size_kb": 64, 00:30:55.651 "state": "configuring", 00:30:55.651 "raid_level": "raid5f", 00:30:55.651 "superblock": true, 00:30:55.651 "num_base_bdevs": 4, 00:30:55.651 "num_base_bdevs_discovered": 2, 00:30:55.651 "num_base_bdevs_operational": 3, 00:30:55.651 "base_bdevs_list": [ 00:30:55.651 { 00:30:55.651 "name": null, 00:30:55.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.651 "is_configured": false, 00:30:55.651 "data_offset": 2048, 00:30:55.651 "data_size": 63488 00:30:55.651 }, 00:30:55.651 { 00:30:55.651 "name": "pt2", 00:30:55.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.651 "is_configured": true, 00:30:55.651 "data_offset": 2048, 00:30:55.651 "data_size": 63488 00:30:55.651 }, 00:30:55.651 { 00:30:55.651 "name": "pt3", 00:30:55.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:55.651 "is_configured": true, 00:30:55.651 "data_offset": 2048, 00:30:55.651 "data_size": 63488 00:30:55.651 }, 00:30:55.651 { 00:30:55.651 "name": null, 00:30:55.651 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:55.651 "is_configured": false, 00:30:55.652 "data_offset": 2048, 00:30:55.652 "data_size": 63488 00:30:55.652 } 00:30:55.652 ] 
00:30:55.652 }' 00:30:55.652 07:28:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.652 07:28:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 [2024-11-20 07:28:20.388065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:56.219 [2024-11-20 07:28:20.388279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.219 [2024-11-20 07:28:20.388328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:56.219 [2024-11-20 07:28:20.388346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.219 [2024-11-20 07:28:20.388907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.219 [2024-11-20 07:28:20.388933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:30:56.219 [2024-11-20 07:28:20.389034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:56.219 [2024-11-20 07:28:20.389071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:56.219 [2024-11-20 07:28:20.389248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:30:56.219 [2024-11-20 07:28:20.389264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:56.219 [2024-11-20 07:28:20.389652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:56.219 [2024-11-20 07:28:20.396568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:30:56.219 [2024-11-20 07:28:20.396629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:30:56.219 [2024-11-20 07:28:20.396996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.219 pt4 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.219 07:28:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.219 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.219 "name": "raid_bdev1", 00:30:56.219 "uuid": "b6412add-f3e4-42dd-94a9-ad118bc27941", 00:30:56.219 "strip_size_kb": 64, 00:30:56.219 "state": "online", 00:30:56.219 "raid_level": "raid5f", 00:30:56.219 "superblock": true, 00:30:56.219 "num_base_bdevs": 4, 00:30:56.219 "num_base_bdevs_discovered": 3, 00:30:56.219 "num_base_bdevs_operational": 3, 00:30:56.219 "base_bdevs_list": [ 00:30:56.219 { 00:30:56.219 "name": null, 00:30:56.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.219 "is_configured": false, 00:30:56.219 "data_offset": 2048, 00:30:56.219 "data_size": 63488 00:30:56.219 }, 00:30:56.219 { 00:30:56.219 "name": "pt2", 00:30:56.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:56.219 "is_configured": true, 00:30:56.219 "data_offset": 2048, 00:30:56.219 "data_size": 63488 00:30:56.219 }, 00:30:56.219 { 00:30:56.219 "name": "pt3", 00:30:56.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:56.220 "is_configured": true, 00:30:56.220 "data_offset": 2048, 00:30:56.220 "data_size": 63488 
00:30:56.220 }, 00:30:56.220 { 00:30:56.220 "name": "pt4", 00:30:56.220 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:56.220 "is_configured": true, 00:30:56.220 "data_offset": 2048, 00:30:56.220 "data_size": 63488 00:30:56.220 } 00:30:56.220 ] 00:30:56.220 }' 00:30:56.220 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.220 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.787 07:28:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:30:56.787 [2024-11-20 07:28:20.989067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b6412add-f3e4-42dd-94a9-ad118bc27941 '!=' b6412add-f3e4-42dd-94a9-ad118bc27941 ']' 00:30:56.787 07:28:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84706 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84706 ']' 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84706 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.787 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84706 00:30:57.046 killing process with pid 84706 00:30:57.046 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.046 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.046 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84706' 00:30:57.046 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84706 00:30:57.046 [2024-11-20 07:28:21.077060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:57.046 07:28:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84706 00:30:57.046 [2024-11-20 07:28:21.077180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:57.046 [2024-11-20 07:28:21.077281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:57.046 [2024-11-20 07:28:21.077316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:30:57.305 [2024-11-20 07:28:21.449130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:58.242 07:28:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:58.242 
00:30:58.242 real 0m9.749s 00:30:58.242 user 0m15.984s 00:30:58.242 sys 0m1.468s 00:30:58.242 ************************************ 00:30:58.242 END TEST raid5f_superblock_test 00:30:58.242 ************************************ 00:30:58.242 07:28:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.242 07:28:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.501 07:28:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:30:58.501 07:28:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:30:58.501 07:28:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:58.501 07:28:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.501 07:28:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:58.501 ************************************ 00:30:58.501 START TEST raid5f_rebuild_test 00:30:58.501 ************************************ 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:30:58.501 07:28:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85203 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85203 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85203 ']' 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.501 07:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.501 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:58.501 Zero copy mechanism will not be used. 00:30:58.501 [2024-11-20 07:28:22.694357] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:30:58.501 [2024-11-20 07:28:22.694671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85203 ] 00:30:58.760 [2024-11-20 07:28:22.907969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.760 [2024-11-20 07:28:23.033427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.019 [2024-11-20 07:28:23.227274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:59.019 [2024-11-20 07:28:23.227318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.587 BaseBdev1_malloc 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.587 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.587 [2024-11-20 07:28:23.665604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:30:59.588 [2024-11-20 07:28:23.665687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.588 [2024-11-20 07:28:23.665737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:59.588 [2024-11-20 07:28:23.665774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.588 [2024-11-20 07:28:23.668421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.588 [2024-11-20 07:28:23.668483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:59.588 BaseBdev1 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 BaseBdev2_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 [2024-11-20 07:28:23.716015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:59.588 [2024-11-20 07:28:23.716114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.588 [2024-11-20 07:28:23.716141] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:59.588 [2024-11-20 07:28:23.716160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.588 [2024-11-20 07:28:23.719024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.588 [2024-11-20 07:28:23.719088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:59.588 BaseBdev2 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 BaseBdev3_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 [2024-11-20 07:28:23.776699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:59.588 [2024-11-20 07:28:23.776760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.588 [2024-11-20 07:28:23.776789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:59.588 [2024-11-20 07:28:23.776806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.588 
[2024-11-20 07:28:23.779475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.588 [2024-11-20 07:28:23.779547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:59.588 BaseBdev3 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 BaseBdev4_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 [2024-11-20 07:28:23.825311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:59.588 [2024-11-20 07:28:23.825388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.588 [2024-11-20 07:28:23.825416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:59.588 [2024-11-20 07:28:23.825433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.588 [2024-11-20 07:28:23.828094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.588 [2024-11-20 07:28:23.828158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:30:59.588 BaseBdev4 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.588 spare_malloc 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.588 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.847 spare_delay 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.847 [2024-11-20 07:28:23.882920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:59.847 [2024-11-20 07:28:23.882983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.847 [2024-11-20 07:28:23.883053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:59.847 [2024-11-20 07:28:23.883073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.847 [2024-11-20 07:28:23.885689] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.847 [2024-11-20 07:28:23.885885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:59.847 spare 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.847 [2024-11-20 07:28:23.894982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:59.847 [2024-11-20 07:28:23.897411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:59.847 [2024-11-20 07:28:23.897646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:59.847 [2024-11-20 07:28:23.897826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:59.847 [2024-11-20 07:28:23.898059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:59.847 [2024-11-20 07:28:23.898174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:59.847 [2024-11-20 07:28:23.898539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:59.847 [2024-11-20 07:28:23.904836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:59.847 [2024-11-20 07:28:23.904859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:59.847 [2024-11-20 07:28:23.905105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:59.847 07:28:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.847 "name": "raid_bdev1", 00:30:59.847 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:30:59.847 "strip_size_kb": 64, 00:30:59.847 "state": "online", 00:30:59.847 
"raid_level": "raid5f", 00:30:59.847 "superblock": false, 00:30:59.847 "num_base_bdevs": 4, 00:30:59.847 "num_base_bdevs_discovered": 4, 00:30:59.847 "num_base_bdevs_operational": 4, 00:30:59.847 "base_bdevs_list": [ 00:30:59.847 { 00:30:59.847 "name": "BaseBdev1", 00:30:59.847 "uuid": "5006c756-1c89-5ba4-9b48-f73e6233c944", 00:30:59.847 "is_configured": true, 00:30:59.847 "data_offset": 0, 00:30:59.847 "data_size": 65536 00:30:59.847 }, 00:30:59.847 { 00:30:59.847 "name": "BaseBdev2", 00:30:59.847 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:30:59.847 "is_configured": true, 00:30:59.847 "data_offset": 0, 00:30:59.847 "data_size": 65536 00:30:59.847 }, 00:30:59.847 { 00:30:59.847 "name": "BaseBdev3", 00:30:59.847 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:30:59.847 "is_configured": true, 00:30:59.847 "data_offset": 0, 00:30:59.847 "data_size": 65536 00:30:59.847 }, 00:30:59.847 { 00:30:59.847 "name": "BaseBdev4", 00:30:59.847 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:30:59.847 "is_configured": true, 00:30:59.847 "data_offset": 0, 00:30:59.847 "data_size": 65536 00:30:59.847 } 00:30:59.847 ] 00:30:59.847 }' 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.847 07:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:00.415 [2024-11-20 07:28:24.432695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:31:00.415 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:00.674 [2024-11-20 07:28:24.816528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:00.674 /dev/nbd0 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:00.674 1+0 records in 00:31:00.674 1+0 records out 00:31:00.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283724 s, 14.4 MB/s 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:00.674 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:31:00.675 07:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:31:01.253 512+0 records in 00:31:01.253 512+0 records out 00:31:01.253 100663296 bytes (101 MB, 96 MiB) copied, 0.644811 s, 156 MB/s 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:01.253 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:01.823 
[2024-11-20 07:28:25.853807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.823 [2024-11-20 07:28:25.869901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.823 "name": "raid_bdev1", 00:31:01.823 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:01.823 "strip_size_kb": 64, 00:31:01.823 "state": "online", 00:31:01.823 "raid_level": "raid5f", 00:31:01.823 "superblock": false, 00:31:01.823 "num_base_bdevs": 4, 00:31:01.823 "num_base_bdevs_discovered": 3, 00:31:01.823 "num_base_bdevs_operational": 3, 00:31:01.823 "base_bdevs_list": [ 00:31:01.823 { 00:31:01.823 "name": null, 00:31:01.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.823 "is_configured": false, 00:31:01.823 "data_offset": 0, 00:31:01.823 "data_size": 65536 00:31:01.823 }, 00:31:01.823 { 00:31:01.823 "name": "BaseBdev2", 00:31:01.823 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:01.823 "is_configured": true, 00:31:01.823 "data_offset": 0, 00:31:01.823 "data_size": 65536 00:31:01.823 }, 00:31:01.823 { 00:31:01.823 "name": "BaseBdev3", 00:31:01.823 "uuid": 
"dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:01.823 "is_configured": true, 00:31:01.823 "data_offset": 0, 00:31:01.823 "data_size": 65536 00:31:01.823 }, 00:31:01.823 { 00:31:01.823 "name": "BaseBdev4", 00:31:01.823 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:01.823 "is_configured": true, 00:31:01.823 "data_offset": 0, 00:31:01.823 "data_size": 65536 00:31:01.823 } 00:31:01.823 ] 00:31:01.823 }' 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.823 07:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.391 07:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:02.391 07:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.391 07:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.391 [2024-11-20 07:28:26.398126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:02.391 [2024-11-20 07:28:26.413488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:31:02.391 07:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.391 07:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:02.391 [2024-11-20 07:28:26.422835] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:03.326 07:28:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:03.326 "name": "raid_bdev1", 00:31:03.326 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:03.326 "strip_size_kb": 64, 00:31:03.326 "state": "online", 00:31:03.326 "raid_level": "raid5f", 00:31:03.326 "superblock": false, 00:31:03.326 "num_base_bdevs": 4, 00:31:03.326 "num_base_bdevs_discovered": 4, 00:31:03.326 "num_base_bdevs_operational": 4, 00:31:03.326 "process": { 00:31:03.326 "type": "rebuild", 00:31:03.326 "target": "spare", 00:31:03.326 "progress": { 00:31:03.326 "blocks": 17280, 00:31:03.326 "percent": 8 00:31:03.326 } 00:31:03.326 }, 00:31:03.326 "base_bdevs_list": [ 00:31:03.326 { 00:31:03.326 "name": "spare", 00:31:03.326 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:03.326 "is_configured": true, 00:31:03.326 "data_offset": 0, 00:31:03.326 "data_size": 65536 00:31:03.326 }, 00:31:03.326 { 00:31:03.326 "name": "BaseBdev2", 00:31:03.326 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:03.326 "is_configured": true, 00:31:03.326 "data_offset": 0, 00:31:03.326 "data_size": 65536 00:31:03.326 }, 00:31:03.326 { 00:31:03.326 "name": "BaseBdev3", 00:31:03.326 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:03.326 "is_configured": true, 00:31:03.326 "data_offset": 0, 00:31:03.326 "data_size": 65536 00:31:03.326 }, 
00:31:03.326 { 00:31:03.326 "name": "BaseBdev4", 00:31:03.326 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:03.326 "is_configured": true, 00:31:03.326 "data_offset": 0, 00:31:03.326 "data_size": 65536 00:31:03.326 } 00:31:03.326 ] 00:31:03.326 }' 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:03.326 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.586 [2024-11-20 07:28:27.628407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.586 [2024-11-20 07:28:27.634470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:03.586 [2024-11-20 07:28:27.634570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.586 [2024-11-20 07:28:27.634645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.586 [2024-11-20 07:28:27.634664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.586 "name": "raid_bdev1", 00:31:03.586 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:03.586 "strip_size_kb": 64, 00:31:03.586 "state": "online", 00:31:03.586 "raid_level": "raid5f", 00:31:03.586 "superblock": false, 00:31:03.586 "num_base_bdevs": 4, 00:31:03.586 "num_base_bdevs_discovered": 3, 00:31:03.586 "num_base_bdevs_operational": 3, 00:31:03.586 "base_bdevs_list": [ 00:31:03.586 { 00:31:03.586 "name": null, 00:31:03.586 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:03.586 "is_configured": false, 00:31:03.586 "data_offset": 0, 00:31:03.586 "data_size": 65536 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": "BaseBdev2", 00:31:03.586 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 0, 00:31:03.586 "data_size": 65536 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": "BaseBdev3", 00:31:03.586 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 0, 00:31:03.586 "data_size": 65536 00:31:03.586 }, 00:31:03.586 { 00:31:03.586 "name": "BaseBdev4", 00:31:03.586 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:03.586 "is_configured": true, 00:31:03.586 "data_offset": 0, 00:31:03.586 "data_size": 65536 00:31:03.586 } 00:31:03.586 ] 00:31:03.586 }' 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.586 07:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.154 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:04.154 "name": "raid_bdev1", 00:31:04.154 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:04.154 "strip_size_kb": 64, 00:31:04.154 "state": "online", 00:31:04.154 "raid_level": "raid5f", 00:31:04.154 "superblock": false, 00:31:04.154 "num_base_bdevs": 4, 00:31:04.154 "num_base_bdevs_discovered": 3, 00:31:04.154 "num_base_bdevs_operational": 3, 00:31:04.154 "base_bdevs_list": [ 00:31:04.154 { 00:31:04.154 "name": null, 00:31:04.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.154 "is_configured": false, 00:31:04.154 "data_offset": 0, 00:31:04.154 "data_size": 65536 00:31:04.154 }, 00:31:04.154 { 00:31:04.154 "name": "BaseBdev2", 00:31:04.154 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:04.154 "is_configured": true, 00:31:04.154 "data_offset": 0, 00:31:04.154 "data_size": 65536 00:31:04.154 }, 00:31:04.154 { 00:31:04.155 "name": "BaseBdev3", 00:31:04.155 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:04.155 "is_configured": true, 00:31:04.155 "data_offset": 0, 00:31:04.155 "data_size": 65536 00:31:04.155 }, 00:31:04.155 { 00:31:04.155 "name": "BaseBdev4", 00:31:04.155 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:04.155 "is_configured": true, 00:31:04.155 "data_offset": 0, 00:31:04.155 "data_size": 65536 00:31:04.155 } 00:31:04.155 ] 00:31:04.155 }' 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 [2024-11-20 07:28:28.365417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:04.155 [2024-11-20 07:28:28.378579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.155 07:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:04.155 [2024-11-20 07:28:28.386887] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:05.530 "name": "raid_bdev1", 00:31:05.530 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:05.530 "strip_size_kb": 64, 00:31:05.530 "state": "online", 00:31:05.530 "raid_level": "raid5f", 00:31:05.530 "superblock": false, 00:31:05.530 "num_base_bdevs": 4, 00:31:05.530 "num_base_bdevs_discovered": 4, 00:31:05.530 "num_base_bdevs_operational": 4, 00:31:05.530 "process": { 00:31:05.530 "type": "rebuild", 00:31:05.530 "target": "spare", 00:31:05.530 "progress": { 00:31:05.530 "blocks": 17280, 00:31:05.530 "percent": 8 00:31:05.530 } 00:31:05.530 }, 00:31:05.530 "base_bdevs_list": [ 00:31:05.530 { 00:31:05.530 "name": "spare", 00:31:05.530 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:05.530 "is_configured": true, 00:31:05.530 "data_offset": 0, 00:31:05.530 "data_size": 65536 00:31:05.530 }, 00:31:05.530 { 00:31:05.530 "name": "BaseBdev2", 00:31:05.530 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:05.530 "is_configured": true, 00:31:05.530 "data_offset": 0, 00:31:05.530 "data_size": 65536 00:31:05.530 }, 00:31:05.530 { 00:31:05.530 "name": "BaseBdev3", 00:31:05.530 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:05.530 "is_configured": true, 00:31:05.530 "data_offset": 0, 00:31:05.530 "data_size": 65536 00:31:05.530 }, 00:31:05.530 { 00:31:05.530 "name": "BaseBdev4", 00:31:05.530 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:05.530 "is_configured": true, 00:31:05.530 "data_offset": 0, 00:31:05.530 "data_size": 65536 00:31:05.530 } 00:31:05.530 ] 00:31:05.530 }' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.530 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:05.530 "name": "raid_bdev1", 00:31:05.530 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:05.530 "strip_size_kb": 64, 
00:31:05.530 "state": "online", 00:31:05.530 "raid_level": "raid5f", 00:31:05.530 "superblock": false, 00:31:05.530 "num_base_bdevs": 4, 00:31:05.530 "num_base_bdevs_discovered": 4, 00:31:05.530 "num_base_bdevs_operational": 4, 00:31:05.530 "process": { 00:31:05.530 "type": "rebuild", 00:31:05.531 "target": "spare", 00:31:05.531 "progress": { 00:31:05.531 "blocks": 21120, 00:31:05.531 "percent": 10 00:31:05.531 } 00:31:05.531 }, 00:31:05.531 "base_bdevs_list": [ 00:31:05.531 { 00:31:05.531 "name": "spare", 00:31:05.531 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:05.531 "is_configured": true, 00:31:05.531 "data_offset": 0, 00:31:05.531 "data_size": 65536 00:31:05.531 }, 00:31:05.531 { 00:31:05.531 "name": "BaseBdev2", 00:31:05.531 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:05.531 "is_configured": true, 00:31:05.531 "data_offset": 0, 00:31:05.531 "data_size": 65536 00:31:05.531 }, 00:31:05.531 { 00:31:05.531 "name": "BaseBdev3", 00:31:05.531 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:05.531 "is_configured": true, 00:31:05.531 "data_offset": 0, 00:31:05.531 "data_size": 65536 00:31:05.531 }, 00:31:05.531 { 00:31:05.531 "name": "BaseBdev4", 00:31:05.531 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:05.531 "is_configured": true, 00:31:05.531 "data_offset": 0, 00:31:05.531 "data_size": 65536 00:31:05.531 } 00:31:05.531 ] 00:31:05.531 }' 00:31:05.531 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:05.531 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:05.531 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:05.531 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:05.531 07:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.466 07:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:06.725 "name": "raid_bdev1", 00:31:06.725 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:06.725 "strip_size_kb": 64, 00:31:06.725 "state": "online", 00:31:06.725 "raid_level": "raid5f", 00:31:06.725 "superblock": false, 00:31:06.725 "num_base_bdevs": 4, 00:31:06.725 "num_base_bdevs_discovered": 4, 00:31:06.725 "num_base_bdevs_operational": 4, 00:31:06.725 "process": { 00:31:06.725 "type": "rebuild", 00:31:06.725 "target": "spare", 00:31:06.725 "progress": { 00:31:06.725 "blocks": 44160, 00:31:06.725 "percent": 22 00:31:06.725 } 00:31:06.725 }, 00:31:06.725 "base_bdevs_list": [ 00:31:06.725 { 00:31:06.725 "name": "spare", 00:31:06.725 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:06.725 "is_configured": true, 
00:31:06.725 "data_offset": 0, 00:31:06.725 "data_size": 65536 00:31:06.725 }, 00:31:06.725 { 00:31:06.725 "name": "BaseBdev2", 00:31:06.725 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:06.725 "is_configured": true, 00:31:06.725 "data_offset": 0, 00:31:06.725 "data_size": 65536 00:31:06.725 }, 00:31:06.725 { 00:31:06.725 "name": "BaseBdev3", 00:31:06.725 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:06.725 "is_configured": true, 00:31:06.725 "data_offset": 0, 00:31:06.725 "data_size": 65536 00:31:06.725 }, 00:31:06.725 { 00:31:06.725 "name": "BaseBdev4", 00:31:06.725 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:06.725 "is_configured": true, 00:31:06.725 "data_offset": 0, 00:31:06.725 "data_size": 65536 00:31:06.725 } 00:31:06.725 ] 00:31:06.725 }' 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:06.725 07:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:07.660 "name": "raid_bdev1", 00:31:07.660 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:07.660 "strip_size_kb": 64, 00:31:07.660 "state": "online", 00:31:07.660 "raid_level": "raid5f", 00:31:07.660 "superblock": false, 00:31:07.660 "num_base_bdevs": 4, 00:31:07.660 "num_base_bdevs_discovered": 4, 00:31:07.660 "num_base_bdevs_operational": 4, 00:31:07.660 "process": { 00:31:07.660 "type": "rebuild", 00:31:07.660 "target": "spare", 00:31:07.660 "progress": { 00:31:07.660 "blocks": 65280, 00:31:07.660 "percent": 33 00:31:07.660 } 00:31:07.660 }, 00:31:07.660 "base_bdevs_list": [ 00:31:07.660 { 00:31:07.660 "name": "spare", 00:31:07.660 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:07.660 "is_configured": true, 00:31:07.660 "data_offset": 0, 00:31:07.660 "data_size": 65536 00:31:07.660 }, 00:31:07.660 { 00:31:07.660 "name": "BaseBdev2", 00:31:07.660 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:07.660 "is_configured": true, 00:31:07.660 "data_offset": 0, 00:31:07.660 "data_size": 65536 00:31:07.660 }, 00:31:07.660 { 00:31:07.660 "name": "BaseBdev3", 00:31:07.660 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:07.660 "is_configured": true, 00:31:07.660 "data_offset": 0, 00:31:07.660 "data_size": 65536 00:31:07.660 }, 00:31:07.660 { 00:31:07.660 "name": "BaseBdev4", 00:31:07.660 "uuid": 
"42d55fb6-a201-5946-9576-76990971d368", 00:31:07.660 "is_configured": true, 00:31:07.660 "data_offset": 0, 00:31:07.660 "data_size": 65536 00:31:07.660 } 00:31:07.660 ] 00:31:07.660 }' 00:31:07.660 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:07.918 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:07.918 07:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:07.918 07:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:07.918 07:28:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:08.851 "name": "raid_bdev1", 00:31:08.851 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:08.851 "strip_size_kb": 64, 00:31:08.851 "state": "online", 00:31:08.851 "raid_level": "raid5f", 00:31:08.851 "superblock": false, 00:31:08.851 "num_base_bdevs": 4, 00:31:08.851 "num_base_bdevs_discovered": 4, 00:31:08.851 "num_base_bdevs_operational": 4, 00:31:08.851 "process": { 00:31:08.851 "type": "rebuild", 00:31:08.851 "target": "spare", 00:31:08.851 "progress": { 00:31:08.851 "blocks": 88320, 00:31:08.851 "percent": 44 00:31:08.851 } 00:31:08.851 }, 00:31:08.851 "base_bdevs_list": [ 00:31:08.851 { 00:31:08.851 "name": "spare", 00:31:08.851 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:08.851 "is_configured": true, 00:31:08.851 "data_offset": 0, 00:31:08.851 "data_size": 65536 00:31:08.851 }, 00:31:08.851 { 00:31:08.851 "name": "BaseBdev2", 00:31:08.851 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:08.851 "is_configured": true, 00:31:08.851 "data_offset": 0, 00:31:08.851 "data_size": 65536 00:31:08.851 }, 00:31:08.851 { 00:31:08.851 "name": "BaseBdev3", 00:31:08.851 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:08.851 "is_configured": true, 00:31:08.851 "data_offset": 0, 00:31:08.851 "data_size": 65536 00:31:08.851 }, 00:31:08.851 { 00:31:08.851 "name": "BaseBdev4", 00:31:08.851 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:08.851 "is_configured": true, 00:31:08.851 "data_offset": 0, 00:31:08.851 "data_size": 65536 00:31:08.851 } 00:31:08.851 ] 00:31:08.851 }' 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:08.851 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:09.110 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:31:09.110 07:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:10.044 "name": "raid_bdev1", 00:31:10.044 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:10.044 "strip_size_kb": 64, 00:31:10.044 "state": "online", 00:31:10.044 "raid_level": "raid5f", 00:31:10.044 "superblock": false, 00:31:10.044 "num_base_bdevs": 4, 00:31:10.044 "num_base_bdevs_discovered": 4, 00:31:10.044 "num_base_bdevs_operational": 4, 00:31:10.044 "process": { 00:31:10.044 "type": "rebuild", 00:31:10.044 "target": "spare", 00:31:10.044 "progress": { 00:31:10.044 "blocks": 109440, 00:31:10.044 "percent": 55 00:31:10.044 } 00:31:10.044 }, 00:31:10.044 
"base_bdevs_list": [ 00:31:10.044 { 00:31:10.044 "name": "spare", 00:31:10.044 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:10.044 "is_configured": true, 00:31:10.044 "data_offset": 0, 00:31:10.044 "data_size": 65536 00:31:10.044 }, 00:31:10.044 { 00:31:10.044 "name": "BaseBdev2", 00:31:10.044 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:10.044 "is_configured": true, 00:31:10.044 "data_offset": 0, 00:31:10.044 "data_size": 65536 00:31:10.044 }, 00:31:10.044 { 00:31:10.044 "name": "BaseBdev3", 00:31:10.044 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:10.044 "is_configured": true, 00:31:10.044 "data_offset": 0, 00:31:10.044 "data_size": 65536 00:31:10.044 }, 00:31:10.044 { 00:31:10.044 "name": "BaseBdev4", 00:31:10.044 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:10.044 "is_configured": true, 00:31:10.044 "data_offset": 0, 00:31:10.044 "data_size": 65536 00:31:10.044 } 00:31:10.044 ] 00:31:10.044 }' 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:10.044 07:28:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:11.419 07:28:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:11.419 "name": "raid_bdev1", 00:31:11.419 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:11.419 "strip_size_kb": 64, 00:31:11.419 "state": "online", 00:31:11.419 "raid_level": "raid5f", 00:31:11.419 "superblock": false, 00:31:11.419 "num_base_bdevs": 4, 00:31:11.419 "num_base_bdevs_discovered": 4, 00:31:11.419 "num_base_bdevs_operational": 4, 00:31:11.419 "process": { 00:31:11.419 "type": "rebuild", 00:31:11.419 "target": "spare", 00:31:11.419 "progress": { 00:31:11.419 "blocks": 130560, 00:31:11.419 "percent": 66 00:31:11.419 } 00:31:11.419 }, 00:31:11.419 "base_bdevs_list": [ 00:31:11.419 { 00:31:11.419 "name": "spare", 00:31:11.419 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:11.419 "is_configured": true, 00:31:11.419 "data_offset": 0, 00:31:11.419 "data_size": 65536 00:31:11.419 }, 00:31:11.419 { 00:31:11.419 "name": "BaseBdev2", 00:31:11.419 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:11.419 "is_configured": true, 00:31:11.419 "data_offset": 0, 00:31:11.419 "data_size": 65536 00:31:11.419 }, 00:31:11.419 { 00:31:11.419 "name": "BaseBdev3", 00:31:11.419 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:11.419 
"is_configured": true, 00:31:11.419 "data_offset": 0, 00:31:11.419 "data_size": 65536 00:31:11.419 }, 00:31:11.419 { 00:31:11.419 "name": "BaseBdev4", 00:31:11.419 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:11.419 "is_configured": true, 00:31:11.419 "data_offset": 0, 00:31:11.419 "data_size": 65536 00:31:11.419 } 00:31:11.419 ] 00:31:11.419 }' 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:11.419 07:28:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.354 07:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.355 07:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:31:12.355 07:28:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.355 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:12.355 "name": "raid_bdev1", 00:31:12.355 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:12.355 "strip_size_kb": 64, 00:31:12.355 "state": "online", 00:31:12.355 "raid_level": "raid5f", 00:31:12.355 "superblock": false, 00:31:12.355 "num_base_bdevs": 4, 00:31:12.355 "num_base_bdevs_discovered": 4, 00:31:12.355 "num_base_bdevs_operational": 4, 00:31:12.355 "process": { 00:31:12.355 "type": "rebuild", 00:31:12.355 "target": "spare", 00:31:12.355 "progress": { 00:31:12.355 "blocks": 153600, 00:31:12.355 "percent": 78 00:31:12.355 } 00:31:12.355 }, 00:31:12.355 "base_bdevs_list": [ 00:31:12.355 { 00:31:12.355 "name": "spare", 00:31:12.355 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:12.355 "is_configured": true, 00:31:12.355 "data_offset": 0, 00:31:12.355 "data_size": 65536 00:31:12.355 }, 00:31:12.355 { 00:31:12.355 "name": "BaseBdev2", 00:31:12.355 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:12.355 "is_configured": true, 00:31:12.355 "data_offset": 0, 00:31:12.355 "data_size": 65536 00:31:12.355 }, 00:31:12.355 { 00:31:12.355 "name": "BaseBdev3", 00:31:12.355 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:12.355 "is_configured": true, 00:31:12.355 "data_offset": 0, 00:31:12.355 "data_size": 65536 00:31:12.355 }, 00:31:12.355 { 00:31:12.355 "name": "BaseBdev4", 00:31:12.355 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:12.355 "is_configured": true, 00:31:12.355 "data_offset": 0, 00:31:12.355 "data_size": 65536 00:31:12.355 } 00:31:12.355 ] 00:31:12.355 }' 00:31:12.355 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:12.355 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:12.355 07:28:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:12.614 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:12.614 07:28:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:13.556 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:13.557 "name": "raid_bdev1", 00:31:13.557 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:13.557 "strip_size_kb": 64, 00:31:13.557 "state": "online", 00:31:13.557 "raid_level": "raid5f", 00:31:13.557 "superblock": false, 00:31:13.557 "num_base_bdevs": 4, 00:31:13.557 "num_base_bdevs_discovered": 4, 00:31:13.557 "num_base_bdevs_operational": 4, 00:31:13.557 "process": { 00:31:13.557 
"type": "rebuild", 00:31:13.557 "target": "spare", 00:31:13.557 "progress": { 00:31:13.557 "blocks": 174720, 00:31:13.557 "percent": 88 00:31:13.557 } 00:31:13.557 }, 00:31:13.557 "base_bdevs_list": [ 00:31:13.557 { 00:31:13.557 "name": "spare", 00:31:13.557 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:13.557 "is_configured": true, 00:31:13.557 "data_offset": 0, 00:31:13.557 "data_size": 65536 00:31:13.557 }, 00:31:13.557 { 00:31:13.557 "name": "BaseBdev2", 00:31:13.557 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:13.557 "is_configured": true, 00:31:13.557 "data_offset": 0, 00:31:13.557 "data_size": 65536 00:31:13.557 }, 00:31:13.557 { 00:31:13.557 "name": "BaseBdev3", 00:31:13.557 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:13.557 "is_configured": true, 00:31:13.557 "data_offset": 0, 00:31:13.557 "data_size": 65536 00:31:13.557 }, 00:31:13.557 { 00:31:13.557 "name": "BaseBdev4", 00:31:13.557 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:13.557 "is_configured": true, 00:31:13.557 "data_offset": 0, 00:31:13.557 "data_size": 65536 00:31:13.557 } 00:31:13.557 ] 00:31:13.557 }' 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:13.557 07:28:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:14.939 [2024-11-20 07:28:38.785565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:14.939 [2024-11-20 07:28:38.785700] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:14.939 [2024-11-20 07:28:38.785766] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.939 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:14.939 "name": "raid_bdev1", 00:31:14.939 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:14.939 "strip_size_kb": 64, 00:31:14.939 "state": "online", 00:31:14.940 "raid_level": "raid5f", 00:31:14.940 "superblock": false, 00:31:14.940 "num_base_bdevs": 4, 00:31:14.940 "num_base_bdevs_discovered": 4, 00:31:14.940 "num_base_bdevs_operational": 4, 00:31:14.940 "base_bdevs_list": [ 00:31:14.940 { 00:31:14.940 "name": "spare", 00:31:14.940 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 
00:31:14.940 "name": "BaseBdev2", 00:31:14.940 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev3", 00:31:14.940 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev4", 00:31:14.940 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 } 00:31:14.940 ] 00:31:14.940 }' 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
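The xtrace above shows `verify_raid_bdev_process` polling `bdev_raid_get_bdevs` once a second until `.process.type` stops reporting `rebuild`, then breaking out of the loop (`bdev_raid.sh@707`-`@711`). A minimal self-contained sketch of that sleep-and-poll pattern, with the RPC replaced by a mock that reports completion on the third poll (the mock function and counters are illustrative stand-ins, not part of the real test script):

```shell
#!/bin/sh
# Sketch of the poll loop from bdev_raid.sh@707-711 seen in the trace above.
# poll_process_type stands in for the real query:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type // "none"'
calls=0
poll_process_type() {
    calls=$((calls + 1))
    # Mock: report an in-progress rebuild twice, then completion.
    if [ "$calls" -lt 3 ]; then type=rebuild; else type=none; fi
}

timeout=10
elapsed=0                            # the real script compares bash's SECONDS
while [ "$elapsed" -lt "$timeout" ]; do
    poll_process_type
    [ "$type" = rebuild ] || break   # mirrors the @709 break once type is "none"
    elapsed=$((elapsed + 1))         # the real script does: sleep 1
done
echo "process type after $calls polls: $type"
```

Once the loop breaks, the script re-verifies the bdev with expectations `none`/`none`, exactly as the trace shows for `verify_raid_bdev_process raid_bdev1 none none`.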
00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.940 07:28:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:14.940 "name": "raid_bdev1", 00:31:14.940 "uuid": "f9df9479-602e-42a5-b183-af0f8c534515", 00:31:14.940 "strip_size_kb": 64, 00:31:14.940 "state": "online", 00:31:14.940 "raid_level": "raid5f", 00:31:14.940 "superblock": false, 00:31:14.940 "num_base_bdevs": 4, 00:31:14.940 "num_base_bdevs_discovered": 4, 00:31:14.940 "num_base_bdevs_operational": 4, 00:31:14.940 "base_bdevs_list": [ 00:31:14.940 { 00:31:14.940 "name": "spare", 00:31:14.940 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev2", 00:31:14.940 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev3", 00:31:14.940 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev4", 00:31:14.940 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 } 00:31:14.940 ] 00:31:14.940 }' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:14.940 07:28:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.940 "name": "raid_bdev1", 00:31:14.940 "uuid": 
"f9df9479-602e-42a5-b183-af0f8c534515", 00:31:14.940 "strip_size_kb": 64, 00:31:14.940 "state": "online", 00:31:14.940 "raid_level": "raid5f", 00:31:14.940 "superblock": false, 00:31:14.940 "num_base_bdevs": 4, 00:31:14.940 "num_base_bdevs_discovered": 4, 00:31:14.940 "num_base_bdevs_operational": 4, 00:31:14.940 "base_bdevs_list": [ 00:31:14.940 { 00:31:14.940 "name": "spare", 00:31:14.940 "uuid": "08e70ed9-fd64-5544-83d6-d8535116590a", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev2", 00:31:14.940 "uuid": "37e3c19f-a56d-5a7a-aa00-f540fe826523", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev3", 00:31:14.940 "uuid": "dc937ec2-c11f-54c3-857a-b10e17958d30", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 }, 00:31:14.940 { 00:31:14.940 "name": "BaseBdev4", 00:31:14.940 "uuid": "42d55fb6-a201-5946-9576-76990971d368", 00:31:14.940 "is_configured": true, 00:31:14.940 "data_offset": 0, 00:31:14.940 "data_size": 65536 00:31:14.940 } 00:31:14.940 ] 00:31:14.940 }' 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.940 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.508 [2024-11-20 07:28:39.666589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:15.508 [2024-11-20 07:28:39.666669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:31:15.508 [2024-11-20 07:28:39.666777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:15.508 [2024-11-20 07:28:39.666898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:15.508 [2024-11-20 07:28:39.666915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:15.508 07:28:39 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:15.508 07:28:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:15.767 /dev/nbd0 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:16.026 1+0 records in 00:31:16.026 1+0 records out 00:31:16.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390378 s, 10.5 MB/s 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:16.026 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:16.285 /dev/nbd1 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:16.285 1+0 records in 00:31:16.285 1+0 records out 00:31:16.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394275 s, 10.4 MB/s 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:16.285 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
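The `waitfornbd` trace above retries up to 20 times, checking for the device as a whole word in `/proc/partitions`, and then confirms the device answers I/O with a single direct 4 KiB read. A sketch of that readiness check, using a mocked partition table string so it runs without a real `/dev/nbd0` (the mock data is illustrative only):

```shell
#!/bin/sh
# Sketch of the waitfornbd pattern from autotest_common.sh@872-893 above:
# poll /proc/partitions for the nbd device name, up to 20 attempts.
# A mocked table stands in for the real /proc/partitions here.
partitions="major minor  #blocks  name
   8        0    1048576 sda
  43        0      65536 nbd0"

nbd_name=nbd0
found=no
i=1
while [ "$i" -le 20 ]; do
    if printf '%s\n' "$partitions" | grep -q -w "$nbd_name"; then
        found=yes
        break
    fi
    i=$((i + 1))
done
echo "$nbd_name found: $found"
# The real helper then verifies readability with one direct-I/O block read:
#   dd if=/dev/nbd0 of=.../nbdtest bs=4096 count=1 iflag=direct
# which produces the "1+0 records in / 4096 bytes" lines seen in the log.
```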
00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:16.852 07:28:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:31:17.111 07:28:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85203 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85203 ']' 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85203 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85203 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:17.111 killing process with pid 85203 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85203' 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85203 00:31:17.111 Received shutdown signal, test time was about 60.000000 seconds 00:31:17.111 00:31:17.111 Latency(us) 00:31:17.111 [2024-11-20T07:28:41.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.111 [2024-11-20T07:28:41.400Z] =================================================================================================================== 00:31:17.111 [2024-11-20T07:28:41.400Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:17.111 [2024-11-20 07:28:41.187234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:17.111 07:28:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85203 00:31:17.370 [2024-11-20 07:28:41.593714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:31:18.304 00:31:18.304 real 0m19.971s 00:31:18.304 user 0m24.806s 00:31:18.304 sys 0m2.379s 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.304 ************************************ 00:31:18.304 END TEST raid5f_rebuild_test 00:31:18.304 ************************************ 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.304 07:28:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:31:18.304 07:28:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:18.304 07:28:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.304 07:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:18.304 ************************************ 00:31:18.304 START TEST raid5f_rebuild_test_sb 00:31:18.304 ************************************ 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:18.304 07:28:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.304 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.305 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
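The stretch of xtrace above (`bdev_raid.sh@574`-`@576`) builds the `base_bdevs` list with a counting loop, echoing one `BaseBdevN` name per base device and collecting them into an array. A POSIX-sh sketch of that construction (the real script uses a bash array; a space-separated string stands in here):

```shell
#!/bin/sh
# Sketch of the base_bdevs construction traced at bdev_raid.sh@574-576:
# one "BaseBdevN" name for each of the num_base_bdevs devices.
num_base_bdevs=4
base_bdevs=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
    base_bdevs="$base_bdevs BaseBdev$i"
    i=$((i + 1))
done
base_bdevs=${base_bdevs# }      # trim the leading space
echo "$base_bdevs"
```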
00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85707 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85707 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85707 ']' 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.563 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.564 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.564 07:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:18.564 [2024-11-20 07:28:42.700171] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:31:18.564 I/O size of 3145728 is greater than zero copy threshold (65536). 
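The trace just above (`bdev_raid.sh@581`-`@593`) assembles the bdevperf create arguments: because `raid5f` is not `raid1`, a strip size of 64 is chosen and `-z 64` appended, and because this is the superblock variant (`superblock=true`), `-s` is appended as well. A self-contained sketch of that conditional assembly:

```shell
#!/bin/sh
# Sketch of the create_arg assembly traced at bdev_raid.sh@581-593 above:
# non-raid1 levels get a strip size (-z), superblock tests add -s.
raid_level=raid5f
superblock=true
create_arg=""
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    create_arg="$create_arg -z $strip_size"
fi
if [ "$superblock" = true ]; then
    create_arg="$create_arg -s"
fi
echo "create_arg:$create_arg"
```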
00:31:18.564 Zero copy mechanism will not be used. 00:31:18.564 [2024-11-20 07:28:42.700378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85707 ] 00:31:18.822 [2024-11-20 07:28:42.887473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.822 [2024-11-20 07:28:43.015161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.122 [2024-11-20 07:28:43.193455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:19.122 [2024-11-20 07:28:43.193550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.389 BaseBdev1_malloc 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.389 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.389 [2024-11-20 
07:28:43.677184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:19.389 [2024-11-20 07:28:43.677276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.389 [2024-11-20 07:28:43.677319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:19.389 [2024-11-20 07:28:43.677335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.648 [2024-11-20 07:28:43.680244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.648 [2024-11-20 07:28:43.680326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:19.648 BaseBdev1 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.648 BaseBdev2_malloc 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.648 [2024-11-20 07:28:43.725058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:19.648 [2024-11-20 07:28:43.725136] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.648 [2024-11-20 07:28:43.725160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:19.648 [2024-11-20 07:28:43.725177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.648 [2024-11-20 07:28:43.727742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.648 [2024-11-20 07:28:43.727801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:19.648 BaseBdev2 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.648 BaseBdev3_malloc 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.648 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.648 [2024-11-20 07:28:43.783047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:19.648 [2024-11-20 07:28:43.783122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.648 [2024-11-20 07:28:43.783148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:31:19.648 [2024-11-20 07:28:43.783164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.648 [2024-11-20 07:28:43.785656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.648 [2024-11-20 07:28:43.785716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:19.649 BaseBdev3 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 BaseBdev4_malloc 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 [2024-11-20 07:28:43.834063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:19.649 [2024-11-20 07:28:43.834167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.649 [2024-11-20 07:28:43.834194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:19.649 [2024-11-20 07:28:43.834211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.649 [2024-11-20 07:28:43.837085] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.649 [2024-11-20 07:28:43.837163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:19.649 BaseBdev4 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 spare_malloc 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 spare_delay 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 [2024-11-20 07:28:43.894042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:19.649 [2024-11-20 07:28:43.894137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.649 [2024-11-20 07:28:43.894165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:31:19.649 [2024-11-20 07:28:43.894182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.649 [2024-11-20 07:28:43.897186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.649 [2024-11-20 07:28:43.897247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:19.649 spare 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.649 [2024-11-20 07:28:43.902158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:19.649 [2024-11-20 07:28:43.904736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:19.649 [2024-11-20 07:28:43.904854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:19.649 [2024-11-20 07:28:43.904971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:19.649 [2024-11-20 07:28:43.905291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:19.649 [2024-11-20 07:28:43.905323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:19.649 [2024-11-20 07:28:43.905657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:19.649 [2024-11-20 07:28:43.912629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:19.649 [2024-11-20 07:28:43.912670] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:19.649 [2024-11-20 07:28:43.912906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.649 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:19.908 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.908 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.908 "name": "raid_bdev1", 00:31:19.908 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:19.908 "strip_size_kb": 64, 00:31:19.908 "state": "online", 00:31:19.908 "raid_level": "raid5f", 00:31:19.908 "superblock": true, 00:31:19.908 "num_base_bdevs": 4, 00:31:19.908 "num_base_bdevs_discovered": 4, 00:31:19.908 "num_base_bdevs_operational": 4, 00:31:19.908 "base_bdevs_list": [ 00:31:19.908 { 00:31:19.908 "name": "BaseBdev1", 00:31:19.908 "uuid": "2aa39699-7829-530e-9614-f91a5715c8d0", 00:31:19.908 "is_configured": true, 00:31:19.908 "data_offset": 2048, 00:31:19.908 "data_size": 63488 00:31:19.908 }, 00:31:19.908 { 00:31:19.908 "name": "BaseBdev2", 00:31:19.908 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:19.908 "is_configured": true, 00:31:19.908 "data_offset": 2048, 00:31:19.908 "data_size": 63488 00:31:19.908 }, 00:31:19.908 { 00:31:19.908 "name": "BaseBdev3", 00:31:19.908 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:19.908 "is_configured": true, 00:31:19.908 "data_offset": 2048, 00:31:19.908 "data_size": 63488 00:31:19.908 }, 00:31:19.908 { 00:31:19.908 "name": "BaseBdev4", 00:31:19.908 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:19.908 "is_configured": true, 00:31:19.908 "data_offset": 2048, 00:31:19.908 "data_size": 63488 00:31:19.908 } 00:31:19.908 ] 00:31:19.908 }' 00:31:19.908 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.908 07:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.167 [2024-11-20 07:28:44.373030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.167 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.425 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:20.685 [2024-11-20 07:28:44.760948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:20.685 /dev/nbd0 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:31:20.685 1+0 records in 00:31:20.685 1+0 records out 00:31:20.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240786 s, 17.0 MB/s 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:31:20.685 07:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:31:21.253 496+0 records in 00:31:21.253 496+0 records out 00:31:21.253 97517568 bytes (98 MB, 93 MiB) copied, 0.59145 s, 165 MB/s 00:31:21.253 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.254 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:21.512 [2024-11-20 07:28:45.673006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.512 [2024-11-20 07:28:45.680982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.512 "name": "raid_bdev1", 00:31:21.512 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:21.512 "strip_size_kb": 64, 00:31:21.512 "state": "online", 00:31:21.512 "raid_level": "raid5f", 00:31:21.512 "superblock": true, 00:31:21.512 "num_base_bdevs": 4, 00:31:21.512 "num_base_bdevs_discovered": 3, 00:31:21.512 
"num_base_bdevs_operational": 3, 00:31:21.512 "base_bdevs_list": [ 00:31:21.512 { 00:31:21.512 "name": null, 00:31:21.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.512 "is_configured": false, 00:31:21.512 "data_offset": 0, 00:31:21.512 "data_size": 63488 00:31:21.512 }, 00:31:21.512 { 00:31:21.512 "name": "BaseBdev2", 00:31:21.512 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:21.512 "is_configured": true, 00:31:21.512 "data_offset": 2048, 00:31:21.512 "data_size": 63488 00:31:21.512 }, 00:31:21.512 { 00:31:21.512 "name": "BaseBdev3", 00:31:21.512 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:21.512 "is_configured": true, 00:31:21.512 "data_offset": 2048, 00:31:21.512 "data_size": 63488 00:31:21.512 }, 00:31:21.512 { 00:31:21.512 "name": "BaseBdev4", 00:31:21.512 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:21.512 "is_configured": true, 00:31:21.512 "data_offset": 2048, 00:31:21.512 "data_size": 63488 00:31:21.512 } 00:31:21.512 ] 00:31:21.512 }' 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.512 07:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.079 07:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:22.079 07:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.079 07:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.079 [2024-11-20 07:28:46.217151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:22.079 [2024-11-20 07:28:46.230714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:31:22.079 07:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.079 07:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:22.079 
[2024-11-20 07:28:46.239225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.012 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.013 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.013 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:23.013 "name": "raid_bdev1", 00:31:23.013 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:23.013 "strip_size_kb": 64, 00:31:23.013 "state": "online", 00:31:23.013 "raid_level": "raid5f", 00:31:23.013 "superblock": true, 00:31:23.013 "num_base_bdevs": 4, 00:31:23.013 "num_base_bdevs_discovered": 4, 00:31:23.013 "num_base_bdevs_operational": 4, 00:31:23.013 "process": { 00:31:23.013 "type": "rebuild", 00:31:23.013 "target": "spare", 00:31:23.013 "progress": { 00:31:23.013 "blocks": 17280, 00:31:23.013 "percent": 9 00:31:23.013 } 00:31:23.013 }, 00:31:23.013 "base_bdevs_list": [ 00:31:23.013 { 00:31:23.013 "name": 
"spare", 00:31:23.013 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:23.013 "is_configured": true, 00:31:23.013 "data_offset": 2048, 00:31:23.013 "data_size": 63488 00:31:23.013 }, 00:31:23.013 { 00:31:23.013 "name": "BaseBdev2", 00:31:23.013 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:23.013 "is_configured": true, 00:31:23.013 "data_offset": 2048, 00:31:23.013 "data_size": 63488 00:31:23.013 }, 00:31:23.013 { 00:31:23.013 "name": "BaseBdev3", 00:31:23.013 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:23.013 "is_configured": true, 00:31:23.013 "data_offset": 2048, 00:31:23.013 "data_size": 63488 00:31:23.013 }, 00:31:23.013 { 00:31:23.013 "name": "BaseBdev4", 00:31:23.013 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:23.013 "is_configured": true, 00:31:23.013 "data_offset": 2048, 00:31:23.013 "data_size": 63488 00:31:23.013 } 00:31:23.013 ] 00:31:23.013 }' 00:31:23.013 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.271 [2024-11-20 07:28:47.392587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:23.271 [2024-11-20 07:28:47.450655] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:23.271 [2024-11-20 
07:28:47.450763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.271 [2024-11-20 07:28:47.450787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:23.271 [2024-11-20 07:28:47.450801] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:23.271 "name": "raid_bdev1", 00:31:23.271 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:23.271 "strip_size_kb": 64, 00:31:23.271 "state": "online", 00:31:23.271 "raid_level": "raid5f", 00:31:23.271 "superblock": true, 00:31:23.271 "num_base_bdevs": 4, 00:31:23.271 "num_base_bdevs_discovered": 3, 00:31:23.271 "num_base_bdevs_operational": 3, 00:31:23.271 "base_bdevs_list": [ 00:31:23.271 { 00:31:23.271 "name": null, 00:31:23.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.271 "is_configured": false, 00:31:23.271 "data_offset": 0, 00:31:23.271 "data_size": 63488 00:31:23.271 }, 00:31:23.271 { 00:31:23.271 "name": "BaseBdev2", 00:31:23.271 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:23.271 "is_configured": true, 00:31:23.271 "data_offset": 2048, 00:31:23.271 "data_size": 63488 00:31:23.271 }, 00:31:23.271 { 00:31:23.271 "name": "BaseBdev3", 00:31:23.271 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:23.271 "is_configured": true, 00:31:23.271 "data_offset": 2048, 00:31:23.271 "data_size": 63488 00:31:23.271 }, 00:31:23.271 { 00:31:23.271 "name": "BaseBdev4", 00:31:23.271 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:23.271 "is_configured": true, 00:31:23.271 "data_offset": 2048, 00:31:23.271 "data_size": 63488 00:31:23.271 } 00:31:23.271 ] 00:31:23.271 }' 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:23.271 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:23.871 "name": "raid_bdev1", 00:31:23.871 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:23.871 "strip_size_kb": 64, 00:31:23.871 "state": "online", 00:31:23.871 "raid_level": "raid5f", 00:31:23.871 "superblock": true, 00:31:23.871 "num_base_bdevs": 4, 00:31:23.871 "num_base_bdevs_discovered": 3, 00:31:23.871 "num_base_bdevs_operational": 3, 00:31:23.871 "base_bdevs_list": [ 00:31:23.871 { 00:31:23.871 "name": null, 00:31:23.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.871 "is_configured": false, 00:31:23.871 "data_offset": 0, 00:31:23.871 "data_size": 63488 00:31:23.871 }, 00:31:23.871 { 00:31:23.871 "name": "BaseBdev2", 00:31:23.871 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:23.871 "is_configured": true, 00:31:23.871 "data_offset": 2048, 00:31:23.871 "data_size": 63488 00:31:23.871 }, 00:31:23.871 { 00:31:23.871 "name": "BaseBdev3", 00:31:23.871 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:23.871 "is_configured": true, 
00:31:23.871 "data_offset": 2048, 00:31:23.871 "data_size": 63488 00:31:23.871 }, 00:31:23.871 { 00:31:23.871 "name": "BaseBdev4", 00:31:23.871 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:23.871 "is_configured": true, 00:31:23.871 "data_offset": 2048, 00:31:23.871 "data_size": 63488 00:31:23.871 } 00:31:23.871 ] 00:31:23.871 }' 00:31:23.871 07:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.871 [2024-11-20 07:28:48.078175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:23.871 [2024-11-20 07:28:48.089953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.871 07:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:23.871 [2024-11-20 07:28:48.098066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:24.805 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:24.805 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:24.805 07:28:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:24.805 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:24.805 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.064 "name": "raid_bdev1", 00:31:25.064 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:25.064 "strip_size_kb": 64, 00:31:25.064 "state": "online", 00:31:25.064 "raid_level": "raid5f", 00:31:25.064 "superblock": true, 00:31:25.064 "num_base_bdevs": 4, 00:31:25.064 "num_base_bdevs_discovered": 4, 00:31:25.064 "num_base_bdevs_operational": 4, 00:31:25.064 "process": { 00:31:25.064 "type": "rebuild", 00:31:25.064 "target": "spare", 00:31:25.064 "progress": { 00:31:25.064 "blocks": 17280, 00:31:25.064 "percent": 9 00:31:25.064 } 00:31:25.064 }, 00:31:25.064 "base_bdevs_list": [ 00:31:25.064 { 00:31:25.064 "name": "spare", 00:31:25.064 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev2", 00:31:25.064 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 
00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev3", 00:31:25.064 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev4", 00:31:25.064 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 } 00:31:25.064 ] 00:31:25.064 }' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:31:25.064 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:25.064 07:28:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.064 "name": "raid_bdev1", 00:31:25.064 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:25.064 "strip_size_kb": 64, 00:31:25.064 "state": "online", 00:31:25.064 "raid_level": "raid5f", 00:31:25.064 "superblock": true, 00:31:25.064 "num_base_bdevs": 4, 00:31:25.064 "num_base_bdevs_discovered": 4, 00:31:25.064 "num_base_bdevs_operational": 4, 00:31:25.064 "process": { 00:31:25.064 "type": "rebuild", 00:31:25.064 "target": "spare", 00:31:25.064 "progress": { 00:31:25.064 "blocks": 21120, 00:31:25.064 "percent": 11 00:31:25.064 } 00:31:25.064 }, 00:31:25.064 "base_bdevs_list": [ 00:31:25.064 { 00:31:25.064 "name": "spare", 00:31:25.064 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev2", 00:31:25.064 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 
00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev3", 00:31:25.064 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 }, 00:31:25.064 { 00:31:25.064 "name": "BaseBdev4", 00:31:25.064 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:25.064 "is_configured": true, 00:31:25.064 "data_offset": 2048, 00:31:25.064 "data_size": 63488 00:31:25.064 } 00:31:25.064 ] 00:31:25.064 }' 00:31:25.064 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.323 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.323 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.323 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.323 07:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.257 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:26.257 "name": "raid_bdev1", 00:31:26.257 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:26.257 "strip_size_kb": 64, 00:31:26.257 "state": "online", 00:31:26.257 "raid_level": "raid5f", 00:31:26.257 "superblock": true, 00:31:26.257 "num_base_bdevs": 4, 00:31:26.257 "num_base_bdevs_discovered": 4, 00:31:26.257 "num_base_bdevs_operational": 4, 00:31:26.257 "process": { 00:31:26.257 "type": "rebuild", 00:31:26.257 "target": "spare", 00:31:26.257 "progress": { 00:31:26.257 "blocks": 42240, 00:31:26.257 "percent": 22 00:31:26.257 } 00:31:26.257 }, 00:31:26.257 "base_bdevs_list": [ 00:31:26.257 { 00:31:26.257 "name": "spare", 00:31:26.257 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:26.257 "is_configured": true, 00:31:26.257 "data_offset": 2048, 00:31:26.257 "data_size": 63488 00:31:26.257 }, 00:31:26.257 { 00:31:26.257 "name": "BaseBdev2", 00:31:26.257 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:26.257 "is_configured": true, 00:31:26.257 "data_offset": 2048, 00:31:26.257 "data_size": 63488 00:31:26.257 }, 00:31:26.257 { 00:31:26.257 "name": "BaseBdev3", 00:31:26.257 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:26.257 "is_configured": true, 00:31:26.257 "data_offset": 2048, 00:31:26.257 "data_size": 63488 00:31:26.257 }, 00:31:26.257 { 00:31:26.257 "name": "BaseBdev4", 00:31:26.257 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:26.257 "is_configured": true, 00:31:26.258 "data_offset": 2048, 00:31:26.258 "data_size": 63488 00:31:26.258 } 00:31:26.258 ] 00:31:26.258 }' 00:31:26.258 07:28:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:26.258 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:26.258 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:26.565 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:26.565 07:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.506 "name": "raid_bdev1", 00:31:27.506 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:27.506 
"strip_size_kb": 64, 00:31:27.506 "state": "online", 00:31:27.506 "raid_level": "raid5f", 00:31:27.506 "superblock": true, 00:31:27.506 "num_base_bdevs": 4, 00:31:27.506 "num_base_bdevs_discovered": 4, 00:31:27.506 "num_base_bdevs_operational": 4, 00:31:27.506 "process": { 00:31:27.506 "type": "rebuild", 00:31:27.506 "target": "spare", 00:31:27.506 "progress": { 00:31:27.506 "blocks": 65280, 00:31:27.506 "percent": 34 00:31:27.506 } 00:31:27.506 }, 00:31:27.506 "base_bdevs_list": [ 00:31:27.506 { 00:31:27.506 "name": "spare", 00:31:27.506 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:27.506 "is_configured": true, 00:31:27.506 "data_offset": 2048, 00:31:27.506 "data_size": 63488 00:31:27.506 }, 00:31:27.506 { 00:31:27.506 "name": "BaseBdev2", 00:31:27.506 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:27.506 "is_configured": true, 00:31:27.506 "data_offset": 2048, 00:31:27.506 "data_size": 63488 00:31:27.506 }, 00:31:27.506 { 00:31:27.506 "name": "BaseBdev3", 00:31:27.506 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:27.506 "is_configured": true, 00:31:27.506 "data_offset": 2048, 00:31:27.506 "data_size": 63488 00:31:27.506 }, 00:31:27.506 { 00:31:27.506 "name": "BaseBdev4", 00:31:27.506 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:27.506 "is_configured": true, 00:31:27.506 "data_offset": 2048, 00:31:27.506 "data_size": 63488 00:31:27.506 } 00:31:27.506 ] 00:31:27.506 }' 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.506 07:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:28.883 
07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:28.883 "name": "raid_bdev1", 00:31:28.883 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:28.883 "strip_size_kb": 64, 00:31:28.883 "state": "online", 00:31:28.883 "raid_level": "raid5f", 00:31:28.883 "superblock": true, 00:31:28.883 "num_base_bdevs": 4, 00:31:28.883 "num_base_bdevs_discovered": 4, 00:31:28.883 "num_base_bdevs_operational": 4, 00:31:28.883 "process": { 00:31:28.883 "type": "rebuild", 00:31:28.883 "target": "spare", 00:31:28.883 "progress": { 00:31:28.883 "blocks": 88320, 00:31:28.883 "percent": 46 00:31:28.883 } 00:31:28.883 }, 00:31:28.883 "base_bdevs_list": [ 00:31:28.883 { 00:31:28.883 "name": "spare", 00:31:28.883 "uuid": 
"f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:28.883 "is_configured": true, 00:31:28.883 "data_offset": 2048, 00:31:28.883 "data_size": 63488 00:31:28.883 }, 00:31:28.883 { 00:31:28.883 "name": "BaseBdev2", 00:31:28.883 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:28.883 "is_configured": true, 00:31:28.883 "data_offset": 2048, 00:31:28.883 "data_size": 63488 00:31:28.883 }, 00:31:28.883 { 00:31:28.883 "name": "BaseBdev3", 00:31:28.883 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:28.883 "is_configured": true, 00:31:28.883 "data_offset": 2048, 00:31:28.883 "data_size": 63488 00:31:28.883 }, 00:31:28.883 { 00:31:28.883 "name": "BaseBdev4", 00:31:28.883 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:28.883 "is_configured": true, 00:31:28.883 "data_offset": 2048, 00:31:28.883 "data_size": 63488 00:31:28.883 } 00:31:28.883 ] 00:31:28.883 }' 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:28.883 07:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:29.819 "name": "raid_bdev1", 00:31:29.819 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:29.819 "strip_size_kb": 64, 00:31:29.819 "state": "online", 00:31:29.819 "raid_level": "raid5f", 00:31:29.819 "superblock": true, 00:31:29.819 "num_base_bdevs": 4, 00:31:29.819 "num_base_bdevs_discovered": 4, 00:31:29.819 "num_base_bdevs_operational": 4, 00:31:29.819 "process": { 00:31:29.819 "type": "rebuild", 00:31:29.819 "target": "spare", 00:31:29.819 "progress": { 00:31:29.819 "blocks": 109440, 00:31:29.819 "percent": 57 00:31:29.819 } 00:31:29.819 }, 00:31:29.819 "base_bdevs_list": [ 00:31:29.819 { 00:31:29.819 "name": "spare", 00:31:29.819 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:29.819 "is_configured": true, 00:31:29.819 "data_offset": 2048, 00:31:29.819 "data_size": 63488 00:31:29.819 }, 00:31:29.819 { 00:31:29.819 "name": "BaseBdev2", 00:31:29.819 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:29.819 "is_configured": true, 00:31:29.819 "data_offset": 2048, 00:31:29.819 "data_size": 63488 00:31:29.819 }, 00:31:29.819 { 00:31:29.819 "name": "BaseBdev3", 00:31:29.819 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:29.819 "is_configured": true, 00:31:29.819 
"data_offset": 2048, 00:31:29.819 "data_size": 63488 00:31:29.819 }, 00:31:29.819 { 00:31:29.819 "name": "BaseBdev4", 00:31:29.819 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:29.819 "is_configured": true, 00:31:29.819 "data_offset": 2048, 00:31:29.819 "data_size": 63488 00:31:29.819 } 00:31:29.819 ] 00:31:29.819 }' 00:31:29.819 07:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:29.819 07:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:29.819 07:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:29.819 07:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:29.819 07:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:31.194 "name": "raid_bdev1", 00:31:31.194 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:31.194 "strip_size_kb": 64, 00:31:31.194 "state": "online", 00:31:31.194 "raid_level": "raid5f", 00:31:31.194 "superblock": true, 00:31:31.194 "num_base_bdevs": 4, 00:31:31.194 "num_base_bdevs_discovered": 4, 00:31:31.194 "num_base_bdevs_operational": 4, 00:31:31.194 "process": { 00:31:31.194 "type": "rebuild", 00:31:31.194 "target": "spare", 00:31:31.194 "progress": { 00:31:31.194 "blocks": 132480, 00:31:31.194 "percent": 69 00:31:31.194 } 00:31:31.194 }, 00:31:31.194 "base_bdevs_list": [ 00:31:31.194 { 00:31:31.194 "name": "spare", 00:31:31.194 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:31.194 "is_configured": true, 00:31:31.194 "data_offset": 2048, 00:31:31.194 "data_size": 63488 00:31:31.194 }, 00:31:31.194 { 00:31:31.194 "name": "BaseBdev2", 00:31:31.194 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:31.194 "is_configured": true, 00:31:31.194 "data_offset": 2048, 00:31:31.194 "data_size": 63488 00:31:31.194 }, 00:31:31.194 { 00:31:31.194 "name": "BaseBdev3", 00:31:31.194 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:31.194 "is_configured": true, 00:31:31.194 "data_offset": 2048, 00:31:31.194 "data_size": 63488 00:31:31.194 }, 00:31:31.194 { 00:31:31.194 "name": "BaseBdev4", 00:31:31.194 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:31.194 "is_configured": true, 00:31:31.194 "data_offset": 2048, 00:31:31.194 "data_size": 63488 00:31:31.194 } 00:31:31.194 ] 00:31:31.194 }' 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:31.194 07:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:32.132 "name": "raid_bdev1", 00:31:32.132 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:32.132 "strip_size_kb": 64, 00:31:32.132 "state": "online", 00:31:32.132 "raid_level": "raid5f", 00:31:32.132 "superblock": true, 00:31:32.132 "num_base_bdevs": 4, 00:31:32.132 "num_base_bdevs_discovered": 4, 
00:31:32.132 "num_base_bdevs_operational": 4, 00:31:32.132 "process": { 00:31:32.132 "type": "rebuild", 00:31:32.132 "target": "spare", 00:31:32.132 "progress": { 00:31:32.132 "blocks": 153600, 00:31:32.132 "percent": 80 00:31:32.132 } 00:31:32.132 }, 00:31:32.132 "base_bdevs_list": [ 00:31:32.132 { 00:31:32.132 "name": "spare", 00:31:32.132 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:32.132 "is_configured": true, 00:31:32.132 "data_offset": 2048, 00:31:32.132 "data_size": 63488 00:31:32.132 }, 00:31:32.132 { 00:31:32.132 "name": "BaseBdev2", 00:31:32.132 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:32.132 "is_configured": true, 00:31:32.132 "data_offset": 2048, 00:31:32.132 "data_size": 63488 00:31:32.132 }, 00:31:32.132 { 00:31:32.132 "name": "BaseBdev3", 00:31:32.132 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:32.132 "is_configured": true, 00:31:32.132 "data_offset": 2048, 00:31:32.132 "data_size": 63488 00:31:32.132 }, 00:31:32.132 { 00:31:32.132 "name": "BaseBdev4", 00:31:32.132 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:32.132 "is_configured": true, 00:31:32.132 "data_offset": 2048, 00:31:32.132 "data_size": 63488 00:31:32.132 } 00:31:32.132 ] 00:31:32.132 }' 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:32.132 07:28:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.510 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:33.510 "name": "raid_bdev1", 00:31:33.510 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:33.510 "strip_size_kb": 64, 00:31:33.510 "state": "online", 00:31:33.510 "raid_level": "raid5f", 00:31:33.510 "superblock": true, 00:31:33.510 "num_base_bdevs": 4, 00:31:33.510 "num_base_bdevs_discovered": 4, 00:31:33.510 "num_base_bdevs_operational": 4, 00:31:33.510 "process": { 00:31:33.510 "type": "rebuild", 00:31:33.510 "target": "spare", 00:31:33.510 "progress": { 00:31:33.510 "blocks": 176640, 00:31:33.510 "percent": 92 00:31:33.510 } 00:31:33.510 }, 00:31:33.510 "base_bdevs_list": [ 00:31:33.510 { 00:31:33.510 "name": "spare", 00:31:33.510 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:33.510 "is_configured": true, 00:31:33.510 "data_offset": 2048, 00:31:33.510 "data_size": 63488 00:31:33.510 }, 00:31:33.510 { 00:31:33.510 "name": "BaseBdev2", 
00:31:33.510 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:33.510 "is_configured": true, 00:31:33.510 "data_offset": 2048, 00:31:33.510 "data_size": 63488 00:31:33.510 }, 00:31:33.510 { 00:31:33.510 "name": "BaseBdev3", 00:31:33.510 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:33.510 "is_configured": true, 00:31:33.510 "data_offset": 2048, 00:31:33.510 "data_size": 63488 00:31:33.511 }, 00:31:33.511 { 00:31:33.511 "name": "BaseBdev4", 00:31:33.511 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:33.511 "is_configured": true, 00:31:33.511 "data_offset": 2048, 00:31:33.511 "data_size": 63488 00:31:33.511 } 00:31:33.511 ] 00:31:33.511 }' 00:31:33.511 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:33.511 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:33.511 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:33.511 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:33.511 07:28:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:34.077 [2024-11-20 07:28:58.186809] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:34.077 [2024-11-20 07:28:58.186944] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:34.077 [2024-11-20 07:28:58.187120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:34.336 07:28:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.336 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:34.595 "name": "raid_bdev1", 00:31:34.595 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:34.595 "strip_size_kb": 64, 00:31:34.595 "state": "online", 00:31:34.595 "raid_level": "raid5f", 00:31:34.595 "superblock": true, 00:31:34.595 "num_base_bdevs": 4, 00:31:34.595 "num_base_bdevs_discovered": 4, 00:31:34.595 "num_base_bdevs_operational": 4, 00:31:34.595 "base_bdevs_list": [ 00:31:34.595 { 00:31:34.595 "name": "spare", 00:31:34.595 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev2", 00:31:34.595 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev3", 00:31:34.595 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 
"data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev4", 00:31:34.595 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 } 00:31:34.595 ] 00:31:34.595 }' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.595 07:28:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:34.595 "name": "raid_bdev1", 00:31:34.595 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:34.595 "strip_size_kb": 64, 00:31:34.595 "state": "online", 00:31:34.595 "raid_level": "raid5f", 00:31:34.595 "superblock": true, 00:31:34.595 "num_base_bdevs": 4, 00:31:34.595 "num_base_bdevs_discovered": 4, 00:31:34.595 "num_base_bdevs_operational": 4, 00:31:34.595 "base_bdevs_list": [ 00:31:34.595 { 00:31:34.595 "name": "spare", 00:31:34.595 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev2", 00:31:34.595 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev3", 00:31:34.595 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 }, 00:31:34.595 { 00:31:34.595 "name": "BaseBdev4", 00:31:34.595 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:34.595 "is_configured": true, 00:31:34.595 "data_offset": 2048, 00:31:34.595 "data_size": 63488 00:31:34.595 } 00:31:34.595 ] 00:31:34.595 }' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:34.595 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.854 "name": "raid_bdev1", 00:31:34.854 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:34.854 "strip_size_kb": 64, 00:31:34.854 "state": "online", 00:31:34.854 "raid_level": "raid5f", 00:31:34.854 "superblock": true, 00:31:34.854 "num_base_bdevs": 4, 00:31:34.854 "num_base_bdevs_discovered": 4, 00:31:34.854 
"num_base_bdevs_operational": 4, 00:31:34.854 "base_bdevs_list": [ 00:31:34.854 { 00:31:34.854 "name": "spare", 00:31:34.854 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:34.854 "is_configured": true, 00:31:34.854 "data_offset": 2048, 00:31:34.854 "data_size": 63488 00:31:34.854 }, 00:31:34.854 { 00:31:34.854 "name": "BaseBdev2", 00:31:34.854 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:34.854 "is_configured": true, 00:31:34.854 "data_offset": 2048, 00:31:34.854 "data_size": 63488 00:31:34.854 }, 00:31:34.854 { 00:31:34.854 "name": "BaseBdev3", 00:31:34.854 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:34.854 "is_configured": true, 00:31:34.854 "data_offset": 2048, 00:31:34.854 "data_size": 63488 00:31:34.854 }, 00:31:34.854 { 00:31:34.854 "name": "BaseBdev4", 00:31:34.854 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:34.854 "is_configured": true, 00:31:34.854 "data_offset": 2048, 00:31:34.854 "data_size": 63488 00:31:34.854 } 00:31:34.854 ] 00:31:34.854 }' 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.854 07:28:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.112 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:35.112 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.112 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.371 [2024-11-20 07:28:59.402846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:35.371 [2024-11-20 07:28:59.402886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:35.371 [2024-11-20 07:28:59.403019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:35.371 [2024-11-20 07:28:59.403164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:31:35.371 [2024-11-20 07:28:59.403197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:35.371 07:28:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:35.371 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:35.631 /dev/nbd0 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:35.631 1+0 records in 00:31:35.631 1+0 records out 00:31:35.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466383 s, 8.8 MB/s 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:35.631 07:28:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:35.890 /dev/nbd1 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:35.890 1+0 records in 00:31:35.890 1+0 records out 00:31:35.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030098 s, 13.6 MB/s 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:35.890 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:36.149 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:36.408 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.975 07:29:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.975 [2024-11-20 07:29:01.006785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:36.975 [2024-11-20 07:29:01.006855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.975 [2024-11-20 07:29:01.006891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:36.975 [2024-11-20 07:29:01.006907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.975 [2024-11-20 07:29:01.010244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.975 [2024-11-20 07:29:01.010437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:36.975 [2024-11-20 07:29:01.010720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:36.975 [2024-11-20 07:29:01.010912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:36.975 [2024-11-20 07:29:01.011262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:36.975 [2024-11-20 07:29:01.011660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:31:36.975 spare 00:31:36.975 [2024-11-20 07:29:01.011913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.975 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.975 [2024-11-20 07:29:01.112060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:36.975 [2024-11-20 07:29:01.112093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:36.975 [2024-11-20 07:29:01.112400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:31:36.975 [2024-11-20 07:29:01.118273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:36.976 [2024-11-20 07:29:01.118297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:36.976 [2024-11-20 07:29:01.118486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:36.976 07:29:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:36.976 "name": "raid_bdev1", 00:31:36.976 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:36.976 "strip_size_kb": 64, 00:31:36.976 "state": "online", 00:31:36.976 "raid_level": "raid5f", 00:31:36.976 "superblock": true, 00:31:36.976 "num_base_bdevs": 4, 00:31:36.976 "num_base_bdevs_discovered": 4, 00:31:36.976 "num_base_bdevs_operational": 4, 00:31:36.976 "base_bdevs_list": [ 00:31:36.976 { 00:31:36.976 "name": "spare", 00:31:36.976 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:36.976 "is_configured": true, 00:31:36.976 "data_offset": 2048, 00:31:36.976 "data_size": 63488 00:31:36.976 }, 00:31:36.976 { 00:31:36.976 "name": "BaseBdev2", 00:31:36.976 "uuid": 
"c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:36.976 "is_configured": true, 00:31:36.976 "data_offset": 2048, 00:31:36.976 "data_size": 63488 00:31:36.976 }, 00:31:36.976 { 00:31:36.976 "name": "BaseBdev3", 00:31:36.976 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:36.976 "is_configured": true, 00:31:36.976 "data_offset": 2048, 00:31:36.976 "data_size": 63488 00:31:36.976 }, 00:31:36.976 { 00:31:36.976 "name": "BaseBdev4", 00:31:36.976 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:36.976 "is_configured": true, 00:31:36.976 "data_offset": 2048, 00:31:36.976 "data_size": 63488 00:31:36.976 } 00:31:36.976 ] 00:31:36.976 }' 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:36.976 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.543 07:29:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:37.543 "name": "raid_bdev1", 00:31:37.543 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:37.543 "strip_size_kb": 64, 00:31:37.543 "state": "online", 00:31:37.543 "raid_level": "raid5f", 00:31:37.543 "superblock": true, 00:31:37.543 "num_base_bdevs": 4, 00:31:37.543 "num_base_bdevs_discovered": 4, 00:31:37.543 "num_base_bdevs_operational": 4, 00:31:37.543 "base_bdevs_list": [ 00:31:37.543 { 00:31:37.543 "name": "spare", 00:31:37.543 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:37.543 "is_configured": true, 00:31:37.543 "data_offset": 2048, 00:31:37.543 "data_size": 63488 00:31:37.543 }, 00:31:37.543 { 00:31:37.543 "name": "BaseBdev2", 00:31:37.543 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:37.543 "is_configured": true, 00:31:37.543 "data_offset": 2048, 00:31:37.543 "data_size": 63488 00:31:37.543 }, 00:31:37.543 { 00:31:37.543 "name": "BaseBdev3", 00:31:37.543 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:37.543 "is_configured": true, 00:31:37.543 "data_offset": 2048, 00:31:37.543 "data_size": 63488 00:31:37.543 }, 00:31:37.543 { 00:31:37.543 "name": "BaseBdev4", 00:31:37.543 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:37.543 "is_configured": true, 00:31:37.543 "data_offset": 2048, 00:31:37.543 "data_size": 63488 00:31:37.543 } 00:31:37.543 ] 00:31:37.543 }' 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.543 
07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.543 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.543 [2024-11-20 07:29:01.825629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:37.803 "name": "raid_bdev1", 00:31:37.803 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:37.803 "strip_size_kb": 64, 00:31:37.803 "state": "online", 00:31:37.803 "raid_level": "raid5f", 00:31:37.803 "superblock": true, 00:31:37.803 "num_base_bdevs": 4, 00:31:37.803 "num_base_bdevs_discovered": 3, 00:31:37.803 "num_base_bdevs_operational": 3, 00:31:37.803 "base_bdevs_list": [ 00:31:37.803 { 00:31:37.803 "name": null, 00:31:37.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.803 "is_configured": false, 00:31:37.803 "data_offset": 0, 00:31:37.803 "data_size": 63488 00:31:37.803 }, 00:31:37.803 { 00:31:37.803 "name": "BaseBdev2", 00:31:37.803 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:37.803 "is_configured": true, 00:31:37.803 "data_offset": 2048, 00:31:37.803 "data_size": 63488 00:31:37.803 }, 00:31:37.803 { 00:31:37.803 "name": "BaseBdev3", 00:31:37.803 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:37.803 "is_configured": true, 00:31:37.803 "data_offset": 2048, 00:31:37.803 "data_size": 63488 00:31:37.803 }, 00:31:37.803 { 00:31:37.803 "name": "BaseBdev4", 
00:31:37.803 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:37.803 "is_configured": true, 00:31:37.803 "data_offset": 2048, 00:31:37.803 "data_size": 63488 00:31:37.803 } 00:31:37.803 ] 00:31:37.803 }' 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:37.803 07:29:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.062 07:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:38.062 07:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.062 07:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.062 [2024-11-20 07:29:02.325841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:38.062 [2024-11-20 07:29:02.326110] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:38.062 [2024-11-20 07:29:02.326137] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:38.062 [2024-11-20 07:29:02.326196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:38.062 [2024-11-20 07:29:02.339839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:31:38.062 07:29:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.062 07:29:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:31:38.062 [2024-11-20 07:29:02.348553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:39.439 "name": "raid_bdev1", 00:31:39.439 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:39.439 "strip_size_kb": 64, 00:31:39.439 "state": "online", 00:31:39.439 
"raid_level": "raid5f", 00:31:39.439 "superblock": true, 00:31:39.439 "num_base_bdevs": 4, 00:31:39.439 "num_base_bdevs_discovered": 4, 00:31:39.439 "num_base_bdevs_operational": 4, 00:31:39.439 "process": { 00:31:39.439 "type": "rebuild", 00:31:39.439 "target": "spare", 00:31:39.439 "progress": { 00:31:39.439 "blocks": 17280, 00:31:39.439 "percent": 9 00:31:39.439 } 00:31:39.439 }, 00:31:39.439 "base_bdevs_list": [ 00:31:39.439 { 00:31:39.439 "name": "spare", 00:31:39.439 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev2", 00:31:39.439 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev3", 00:31:39.439 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev4", 00:31:39.439 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 } 00:31:39.439 ] 00:31:39.439 }' 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.439 [2024-11-20 07:29:03.505799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:39.439 [2024-11-20 07:29:03.559690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:39.439 [2024-11-20 07:29:03.559802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.439 [2024-11-20 07:29:03.559828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:39.439 [2024-11-20 07:29:03.559845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.439 "name": "raid_bdev1", 00:31:39.439 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:39.439 "strip_size_kb": 64, 00:31:39.439 "state": "online", 00:31:39.439 "raid_level": "raid5f", 00:31:39.439 "superblock": true, 00:31:39.439 "num_base_bdevs": 4, 00:31:39.439 "num_base_bdevs_discovered": 3, 00:31:39.439 "num_base_bdevs_operational": 3, 00:31:39.439 "base_bdevs_list": [ 00:31:39.439 { 00:31:39.439 "name": null, 00:31:39.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.439 "is_configured": false, 00:31:39.439 "data_offset": 0, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev2", 00:31:39.439 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev3", 00:31:39.439 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 }, 00:31:39.439 { 00:31:39.439 "name": "BaseBdev4", 00:31:39.439 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:39.439 "is_configured": true, 00:31:39.439 "data_offset": 2048, 00:31:39.439 "data_size": 63488 00:31:39.439 } 00:31:39.439 ] 00:31:39.439 }' 
00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.439 07:29:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.007 07:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:40.007 07:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.007 07:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.007 [2024-11-20 07:29:04.130036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:40.007 [2024-11-20 07:29:04.130122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.007 [2024-11-20 07:29:04.130159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:31:40.007 [2024-11-20 07:29:04.130178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.007 [2024-11-20 07:29:04.130804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.007 [2024-11-20 07:29:04.130837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:40.007 [2024-11-20 07:29:04.130952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:40.007 [2024-11-20 07:29:04.130977] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:40.007 [2024-11-20 07:29:04.130992] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:40.007 [2024-11-20 07:29:04.131050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.007 [2024-11-20 07:29:04.144362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:31:40.007 spare 00:31:40.007 07:29:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.007 07:29:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:31:40.007 [2024-11-20 07:29:04.153248] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.943 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:40.943 "name": "raid_bdev1", 00:31:40.943 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:40.943 "strip_size_kb": 64, 00:31:40.943 "state": 
"online", 00:31:40.943 "raid_level": "raid5f", 00:31:40.943 "superblock": true, 00:31:40.943 "num_base_bdevs": 4, 00:31:40.943 "num_base_bdevs_discovered": 4, 00:31:40.943 "num_base_bdevs_operational": 4, 00:31:40.943 "process": { 00:31:40.943 "type": "rebuild", 00:31:40.943 "target": "spare", 00:31:40.943 "progress": { 00:31:40.943 "blocks": 17280, 00:31:40.943 "percent": 9 00:31:40.943 } 00:31:40.943 }, 00:31:40.943 "base_bdevs_list": [ 00:31:40.943 { 00:31:40.943 "name": "spare", 00:31:40.943 "uuid": "f933a5d1-cc12-5546-8074-51da1cb6751f", 00:31:40.943 "is_configured": true, 00:31:40.943 "data_offset": 2048, 00:31:40.943 "data_size": 63488 00:31:40.943 }, 00:31:40.943 { 00:31:40.943 "name": "BaseBdev2", 00:31:40.943 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:40.944 "is_configured": true, 00:31:40.944 "data_offset": 2048, 00:31:40.944 "data_size": 63488 00:31:40.944 }, 00:31:40.944 { 00:31:40.944 "name": "BaseBdev3", 00:31:40.944 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:40.944 "is_configured": true, 00:31:40.944 "data_offset": 2048, 00:31:40.944 "data_size": 63488 00:31:40.944 }, 00:31:40.944 { 00:31:40.944 "name": "BaseBdev4", 00:31:40.944 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:40.944 "is_configured": true, 00:31:40.944 "data_offset": 2048, 00:31:40.944 "data_size": 63488 00:31:40.944 } 00:31:40.944 ] 00:31:40.944 }' 00:31:40.944 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:31:41.202 07:29:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.202 [2024-11-20 07:29:05.315079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:41.202 [2024-11-20 07:29:05.364858] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:41.202 [2024-11-20 07:29:05.364932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.202 [2024-11-20 07:29:05.364962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:41.202 [2024-11-20 07:29:05.364974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:41.202 07:29:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.202 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:41.202 "name": "raid_bdev1", 00:31:41.202 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:41.202 "strip_size_kb": 64, 00:31:41.202 "state": "online", 00:31:41.202 "raid_level": "raid5f", 00:31:41.202 "superblock": true, 00:31:41.202 "num_base_bdevs": 4, 00:31:41.202 "num_base_bdevs_discovered": 3, 00:31:41.202 "num_base_bdevs_operational": 3, 00:31:41.202 "base_bdevs_list": [ 00:31:41.202 { 00:31:41.202 "name": null, 00:31:41.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.202 "is_configured": false, 00:31:41.202 "data_offset": 0, 00:31:41.202 "data_size": 63488 00:31:41.202 }, 00:31:41.202 { 00:31:41.202 "name": "BaseBdev2", 00:31:41.202 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:41.202 "is_configured": true, 00:31:41.202 "data_offset": 2048, 00:31:41.202 "data_size": 63488 00:31:41.202 }, 00:31:41.202 { 00:31:41.202 "name": "BaseBdev3", 00:31:41.202 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:41.202 "is_configured": true, 00:31:41.203 "data_offset": 2048, 00:31:41.203 "data_size": 63488 00:31:41.203 }, 00:31:41.203 { 00:31:41.203 "name": "BaseBdev4", 00:31:41.203 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:41.203 "is_configured": true, 00:31:41.203 "data_offset": 2048, 00:31:41.203 
"data_size": 63488 00:31:41.203 } 00:31:41.203 ] 00:31:41.203 }' 00:31:41.203 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:41.203 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.770 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.771 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.771 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.771 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.771 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:41.771 "name": "raid_bdev1", 00:31:41.771 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:41.771 "strip_size_kb": 64, 00:31:41.771 "state": "online", 00:31:41.771 "raid_level": "raid5f", 00:31:41.771 "superblock": true, 00:31:41.771 "num_base_bdevs": 4, 00:31:41.771 "num_base_bdevs_discovered": 3, 00:31:41.771 "num_base_bdevs_operational": 3, 00:31:41.771 "base_bdevs_list": [ 00:31:41.771 { 00:31:41.771 "name": null, 00:31:41.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.771 
"is_configured": false, 00:31:41.771 "data_offset": 0, 00:31:41.771 "data_size": 63488 00:31:41.771 }, 00:31:41.771 { 00:31:41.771 "name": "BaseBdev2", 00:31:41.771 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:41.771 "is_configured": true, 00:31:41.771 "data_offset": 2048, 00:31:41.771 "data_size": 63488 00:31:41.771 }, 00:31:41.771 { 00:31:41.771 "name": "BaseBdev3", 00:31:41.771 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:41.771 "is_configured": true, 00:31:41.771 "data_offset": 2048, 00:31:41.771 "data_size": 63488 00:31:41.771 }, 00:31:41.771 { 00:31:41.771 "name": "BaseBdev4", 00:31:41.771 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:41.771 "is_configured": true, 00:31:41.771 "data_offset": 2048, 00:31:41.771 "data_size": 63488 00:31:41.771 } 00:31:41.771 ] 00:31:41.771 }' 00:31:41.771 07:29:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:41.771 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:41.771 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.029 07:29:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.029 [2024-11-20 07:29:06.093310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:42.029 [2024-11-20 07:29:06.093385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.029 [2024-11-20 07:29:06.093416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:31:42.029 [2024-11-20 07:29:06.093430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.029 [2024-11-20 07:29:06.094233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.029 [2024-11-20 07:29:06.094470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:42.029 [2024-11-20 07:29:06.094640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:42.029 [2024-11-20 07:29:06.094665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:42.029 [2024-11-20 07:29:06.094683] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:42.029 [2024-11-20 07:29:06.094697] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:42.029 BaseBdev1 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.029 07:29:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.964 "name": "raid_bdev1", 00:31:42.964 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:42.964 "strip_size_kb": 64, 00:31:42.964 "state": "online", 00:31:42.964 "raid_level": "raid5f", 00:31:42.964 "superblock": true, 00:31:42.964 "num_base_bdevs": 4, 00:31:42.964 "num_base_bdevs_discovered": 3, 00:31:42.964 "num_base_bdevs_operational": 3, 00:31:42.964 "base_bdevs_list": [ 00:31:42.964 { 00:31:42.964 "name": null, 00:31:42.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.964 "is_configured": false, 00:31:42.964 
"data_offset": 0, 00:31:42.964 "data_size": 63488 00:31:42.964 }, 00:31:42.964 { 00:31:42.964 "name": "BaseBdev2", 00:31:42.964 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:42.964 "is_configured": true, 00:31:42.964 "data_offset": 2048, 00:31:42.964 "data_size": 63488 00:31:42.964 }, 00:31:42.964 { 00:31:42.964 "name": "BaseBdev3", 00:31:42.964 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:42.964 "is_configured": true, 00:31:42.964 "data_offset": 2048, 00:31:42.964 "data_size": 63488 00:31:42.964 }, 00:31:42.964 { 00:31:42.964 "name": "BaseBdev4", 00:31:42.964 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:42.964 "is_configured": true, 00:31:42.964 "data_offset": 2048, 00:31:42.964 "data_size": 63488 00:31:42.964 } 00:31:42.964 ] 00:31:42.964 }' 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.964 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:43.531 "name": "raid_bdev1", 00:31:43.531 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:43.531 "strip_size_kb": 64, 00:31:43.531 "state": "online", 00:31:43.531 "raid_level": "raid5f", 00:31:43.531 "superblock": true, 00:31:43.531 "num_base_bdevs": 4, 00:31:43.531 "num_base_bdevs_discovered": 3, 00:31:43.531 "num_base_bdevs_operational": 3, 00:31:43.531 "base_bdevs_list": [ 00:31:43.531 { 00:31:43.531 "name": null, 00:31:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.531 "is_configured": false, 00:31:43.531 "data_offset": 0, 00:31:43.531 "data_size": 63488 00:31:43.531 }, 00:31:43.531 { 00:31:43.531 "name": "BaseBdev2", 00:31:43.531 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:43.531 "is_configured": true, 00:31:43.531 "data_offset": 2048, 00:31:43.531 "data_size": 63488 00:31:43.531 }, 00:31:43.531 { 00:31:43.531 "name": "BaseBdev3", 00:31:43.531 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:43.531 "is_configured": true, 00:31:43.531 "data_offset": 2048, 00:31:43.531 "data_size": 63488 00:31:43.531 }, 00:31:43.531 { 00:31:43.531 "name": "BaseBdev4", 00:31:43.531 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:43.531 "is_configured": true, 00:31:43.531 "data_offset": 2048, 00:31:43.531 "data_size": 63488 00:31:43.531 } 00:31:43.531 ] 00:31:43.531 }' 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:43.531 
07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.531 [2024-11-20 07:29:07.805994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:43.531 [2024-11-20 07:29:07.806366] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:43.531 [2024-11-20 07:29:07.806400] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:43.531 request: 00:31:43.531 { 00:31:43.531 "base_bdev": "BaseBdev1", 00:31:43.531 "raid_bdev": "raid_bdev1", 00:31:43.531 "method": "bdev_raid_add_base_bdev", 00:31:43.531 "req_id": 1 00:31:43.531 } 00:31:43.531 Got JSON-RPC error response 00:31:43.531 response: 00:31:43.531 { 00:31:43.531 "code": -22, 00:31:43.531 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:31:43.531 } 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:43.531 07:29:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.906 "name": "raid_bdev1", 00:31:44.906 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:44.906 "strip_size_kb": 64, 00:31:44.906 "state": "online", 00:31:44.906 "raid_level": "raid5f", 00:31:44.906 "superblock": true, 00:31:44.906 "num_base_bdevs": 4, 00:31:44.906 "num_base_bdevs_discovered": 3, 00:31:44.906 "num_base_bdevs_operational": 3, 00:31:44.906 "base_bdevs_list": [ 00:31:44.906 { 00:31:44.906 "name": null, 00:31:44.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.906 "is_configured": false, 00:31:44.906 "data_offset": 0, 00:31:44.906 "data_size": 63488 00:31:44.906 }, 00:31:44.906 { 00:31:44.906 "name": "BaseBdev2", 00:31:44.906 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:44.906 "is_configured": true, 00:31:44.906 "data_offset": 2048, 00:31:44.906 "data_size": 63488 00:31:44.906 }, 00:31:44.906 { 00:31:44.906 "name": "BaseBdev3", 00:31:44.906 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:44.906 "is_configured": true, 00:31:44.906 "data_offset": 2048, 00:31:44.906 "data_size": 63488 00:31:44.906 }, 00:31:44.906 { 00:31:44.906 "name": "BaseBdev4", 00:31:44.906 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:44.906 "is_configured": true, 00:31:44.906 "data_offset": 2048, 00:31:44.906 "data_size": 63488 00:31:44.906 } 00:31:44.906 ] 00:31:44.906 }' 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.906 07:29:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:45.164 "name": "raid_bdev1", 00:31:45.164 "uuid": "aa0e0fb5-bf47-40be-8dae-92f985679c86", 00:31:45.164 "strip_size_kb": 64, 00:31:45.164 "state": "online", 00:31:45.164 "raid_level": "raid5f", 00:31:45.164 "superblock": true, 00:31:45.164 "num_base_bdevs": 4, 00:31:45.164 "num_base_bdevs_discovered": 3, 00:31:45.164 "num_base_bdevs_operational": 3, 00:31:45.164 "base_bdevs_list": [ 00:31:45.164 { 00:31:45.164 "name": null, 00:31:45.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.164 "is_configured": false, 00:31:45.164 "data_offset": 0, 00:31:45.164 "data_size": 63488 00:31:45.164 }, 00:31:45.164 { 00:31:45.164 "name": "BaseBdev2", 00:31:45.164 "uuid": "c8d6c816-30fa-552f-b7e4-2bbd92b83ae0", 00:31:45.164 "is_configured": true, 
00:31:45.164 "data_offset": 2048, 00:31:45.164 "data_size": 63488 00:31:45.164 }, 00:31:45.164 { 00:31:45.164 "name": "BaseBdev3", 00:31:45.164 "uuid": "aa630268-46cf-5046-b8fb-851144288b31", 00:31:45.164 "is_configured": true, 00:31:45.164 "data_offset": 2048, 00:31:45.164 "data_size": 63488 00:31:45.164 }, 00:31:45.164 { 00:31:45.164 "name": "BaseBdev4", 00:31:45.164 "uuid": "cbd7ab15-9da0-5997-80a2-7e268a924e50", 00:31:45.164 "is_configured": true, 00:31:45.164 "data_offset": 2048, 00:31:45.164 "data_size": 63488 00:31:45.164 } 00:31:45.164 ] 00:31:45.164 }' 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:45.164 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85707 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85707 ']' 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85707 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85707 00:31:45.424 killing process with pid 85707 00:31:45.424 Received shutdown signal, test time was about 60.000000 seconds 00:31:45.424 00:31:45.424 Latency(us) 00:31:45.424 [2024-11-20T07:29:09.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.424 [2024-11-20T07:29:09.713Z] 
=================================================================================================================== 00:31:45.424 [2024-11-20T07:29:09.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85707' 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85707 00:31:45.424 [2024-11-20 07:29:09.531459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:45.424 07:29:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85707 00:31:45.424 [2024-11-20 07:29:09.531651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.424 [2024-11-20 07:29:09.531745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:45.424 [2024-11-20 07:29:09.531766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:45.683 [2024-11-20 07:29:09.934992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:47.059 ************************************ 00:31:47.059 END TEST raid5f_rebuild_test_sb 00:31:47.059 ************************************ 00:31:47.059 07:29:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:31:47.059 00:31:47.059 real 0m28.396s 00:31:47.059 user 0m36.940s 00:31:47.059 sys 0m2.792s 00:31:47.059 07:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.059 07:29:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.059 07:29:11 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:31:47.059 07:29:11 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:31:47.059 07:29:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:47.059 07:29:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.059 07:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:47.059 ************************************ 00:31:47.059 START TEST raid_state_function_test_sb_4k 00:31:47.059 ************************************ 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:47.059 07:29:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86528 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86528' 00:31:47.059 Process raid pid: 86528 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86528 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86528 ']' 00:31:47.059 07:29:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.059 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.060 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.060 07:29:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:47.060 [2024-11-20 07:29:11.142105] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:31:47.060 [2024-11-20 07:29:11.142544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.060 [2024-11-20 07:29:11.322533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.318 [2024-11-20 07:29:11.482526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.577 [2024-11-20 07:29:11.736144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:47.577 [2024-11-20 07:29:11.736226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.144 [2024-11-20 07:29:12.206303] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:48.144 [2024-11-20 07:29:12.206511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:48.144 [2024-11-20 07:29:12.206538] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:48.144 [2024-11-20 07:29:12.206555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.144 
07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.144 "name": "Existed_Raid", 00:31:48.144 "uuid": "e00805bb-c04a-41e2-8965-ca86991a8c15", 00:31:48.144 "strip_size_kb": 0, 00:31:48.144 "state": "configuring", 00:31:48.144 "raid_level": "raid1", 00:31:48.144 "superblock": true, 00:31:48.144 "num_base_bdevs": 2, 00:31:48.144 "num_base_bdevs_discovered": 0, 00:31:48.144 "num_base_bdevs_operational": 2, 00:31:48.144 "base_bdevs_list": [ 00:31:48.144 { 00:31:48.144 "name": "BaseBdev1", 00:31:48.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.144 "is_configured": false, 00:31:48.144 "data_offset": 0, 00:31:48.144 "data_size": 0 00:31:48.144 }, 00:31:48.144 { 00:31:48.144 "name": "BaseBdev2", 00:31:48.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.144 "is_configured": false, 00:31:48.144 "data_offset": 0, 00:31:48.144 "data_size": 0 00:31:48.144 } 00:31:48.144 ] 00:31:48.144 }' 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.144 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.813 [2024-11-20 07:29:12.722393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:48.813 [2024-11-20 07:29:12.722432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.813 [2024-11-20 07:29:12.734377] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:48.813 [2024-11-20 07:29:12.734613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:48.813 [2024-11-20 07:29:12.734747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:48.813 [2024-11-20 07:29:12.734817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.813 07:29:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.813 [2024-11-20 07:29:12.781352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:48.813 BaseBdev1 00:31:48.813 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.814 [ 00:31:48.814 { 00:31:48.814 "name": "BaseBdev1", 00:31:48.814 "aliases": [ 00:31:48.814 
"c18f3e3b-8def-4525-aa03-168263ef9c9b" 00:31:48.814 ], 00:31:48.814 "product_name": "Malloc disk", 00:31:48.814 "block_size": 4096, 00:31:48.814 "num_blocks": 8192, 00:31:48.814 "uuid": "c18f3e3b-8def-4525-aa03-168263ef9c9b", 00:31:48.814 "assigned_rate_limits": { 00:31:48.814 "rw_ios_per_sec": 0, 00:31:48.814 "rw_mbytes_per_sec": 0, 00:31:48.814 "r_mbytes_per_sec": 0, 00:31:48.814 "w_mbytes_per_sec": 0 00:31:48.814 }, 00:31:48.814 "claimed": true, 00:31:48.814 "claim_type": "exclusive_write", 00:31:48.814 "zoned": false, 00:31:48.814 "supported_io_types": { 00:31:48.814 "read": true, 00:31:48.814 "write": true, 00:31:48.814 "unmap": true, 00:31:48.814 "flush": true, 00:31:48.814 "reset": true, 00:31:48.814 "nvme_admin": false, 00:31:48.814 "nvme_io": false, 00:31:48.814 "nvme_io_md": false, 00:31:48.814 "write_zeroes": true, 00:31:48.814 "zcopy": true, 00:31:48.814 "get_zone_info": false, 00:31:48.814 "zone_management": false, 00:31:48.814 "zone_append": false, 00:31:48.814 "compare": false, 00:31:48.814 "compare_and_write": false, 00:31:48.814 "abort": true, 00:31:48.814 "seek_hole": false, 00:31:48.814 "seek_data": false, 00:31:48.814 "copy": true, 00:31:48.814 "nvme_iov_md": false 00:31:48.814 }, 00:31:48.814 "memory_domains": [ 00:31:48.814 { 00:31:48.814 "dma_device_id": "system", 00:31:48.814 "dma_device_type": 1 00:31:48.814 }, 00:31:48.814 { 00:31:48.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:48.814 "dma_device_type": 2 00:31:48.814 } 00:31:48.814 ], 00:31:48.814 "driver_specific": {} 00:31:48.814 } 00:31:48.814 ] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.814 "name": "Existed_Raid", 00:31:48.814 "uuid": "53fd18d7-e39c-480c-8c28-b78f2519602e", 00:31:48.814 "strip_size_kb": 0, 00:31:48.814 "state": "configuring", 00:31:48.814 "raid_level": "raid1", 00:31:48.814 "superblock": true, 00:31:48.814 "num_base_bdevs": 2, 00:31:48.814 
"num_base_bdevs_discovered": 1, 00:31:48.814 "num_base_bdevs_operational": 2, 00:31:48.814 "base_bdevs_list": [ 00:31:48.814 { 00:31:48.814 "name": "BaseBdev1", 00:31:48.814 "uuid": "c18f3e3b-8def-4525-aa03-168263ef9c9b", 00:31:48.814 "is_configured": true, 00:31:48.814 "data_offset": 256, 00:31:48.814 "data_size": 7936 00:31:48.814 }, 00:31:48.814 { 00:31:48.814 "name": "BaseBdev2", 00:31:48.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.814 "is_configured": false, 00:31:48.814 "data_offset": 0, 00:31:48.814 "data_size": 0 00:31:48.814 } 00:31:48.814 ] 00:31:48.814 }' 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.814 07:29:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.094 [2024-11-20 07:29:13.365584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:49.094 [2024-11-20 07:29:13.365835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.094 [2024-11-20 07:29:13.377644] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:49.094 [2024-11-20 07:29:13.380379] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:49.094 [2024-11-20 07:29:13.380560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.094 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.353 "name": "Existed_Raid", 00:31:49.353 "uuid": "857a75e0-c9aa-40f4-be73-32d6bf03973d", 00:31:49.353 "strip_size_kb": 0, 00:31:49.353 "state": "configuring", 00:31:49.353 "raid_level": "raid1", 00:31:49.353 "superblock": true, 00:31:49.353 "num_base_bdevs": 2, 00:31:49.353 "num_base_bdevs_discovered": 1, 00:31:49.353 "num_base_bdevs_operational": 2, 00:31:49.353 "base_bdevs_list": [ 00:31:49.353 { 00:31:49.353 "name": "BaseBdev1", 00:31:49.353 "uuid": "c18f3e3b-8def-4525-aa03-168263ef9c9b", 00:31:49.353 "is_configured": true, 00:31:49.353 "data_offset": 256, 00:31:49.353 "data_size": 7936 00:31:49.353 }, 00:31:49.353 { 00:31:49.353 "name": "BaseBdev2", 00:31:49.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.353 "is_configured": false, 00:31:49.353 "data_offset": 0, 00:31:49.353 "data_size": 0 00:31:49.353 } 00:31:49.353 ] 00:31:49.353 }' 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.353 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.921 07:29:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.921 [2024-11-20 07:29:13.962381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:49.921 [2024-11-20 07:29:13.962708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:49.921 [2024-11-20 07:29:13.962743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:49.921 BaseBdev2 00:31:49.921 [2024-11-20 07:29:13.963114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:49.921 [2024-11-20 07:29:13.963332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:49.921 [2024-11-20 07:29:13.963361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:49.921 [2024-11-20 07:29:13.963576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:49.921 07:29:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.921 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.921 [ 00:31:49.921 { 00:31:49.921 "name": "BaseBdev2", 00:31:49.921 "aliases": [ 00:31:49.921 "3efc5a9d-0b9b-413a-988e-8809803f1d89" 00:31:49.921 ], 00:31:49.921 "product_name": "Malloc disk", 00:31:49.921 "block_size": 4096, 00:31:49.921 "num_blocks": 8192, 00:31:49.921 "uuid": "3efc5a9d-0b9b-413a-988e-8809803f1d89", 00:31:49.921 "assigned_rate_limits": { 00:31:49.921 "rw_ios_per_sec": 0, 00:31:49.921 "rw_mbytes_per_sec": 0, 00:31:49.921 "r_mbytes_per_sec": 0, 00:31:49.921 "w_mbytes_per_sec": 0 00:31:49.921 }, 00:31:49.921 "claimed": true, 00:31:49.921 "claim_type": "exclusive_write", 00:31:49.921 "zoned": false, 00:31:49.921 "supported_io_types": { 00:31:49.921 "read": true, 00:31:49.921 "write": true, 00:31:49.921 "unmap": true, 00:31:49.921 "flush": true, 00:31:49.921 "reset": true, 00:31:49.921 "nvme_admin": false, 00:31:49.921 "nvme_io": false, 00:31:49.921 "nvme_io_md": false, 00:31:49.921 "write_zeroes": true, 00:31:49.921 "zcopy": true, 00:31:49.921 "get_zone_info": false, 00:31:49.921 "zone_management": false, 00:31:49.921 "zone_append": false, 00:31:49.921 "compare": false, 00:31:49.921 "compare_and_write": false, 00:31:49.921 "abort": true, 00:31:49.921 "seek_hole": false, 00:31:49.921 "seek_data": false, 00:31:49.921 "copy": true, 00:31:49.921 "nvme_iov_md": false 
00:31:49.921 }, 00:31:49.921 "memory_domains": [ 00:31:49.921 { 00:31:49.922 "dma_device_id": "system", 00:31:49.922 "dma_device_type": 1 00:31:49.922 }, 00:31:49.922 { 00:31:49.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:49.922 "dma_device_type": 2 00:31:49.922 } 00:31:49.922 ], 00:31:49.922 "driver_specific": {} 00:31:49.922 } 00:31:49.922 ] 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.922 07:29:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.922 "name": "Existed_Raid", 00:31:49.922 "uuid": "857a75e0-c9aa-40f4-be73-32d6bf03973d", 00:31:49.922 "strip_size_kb": 0, 00:31:49.922 "state": "online", 00:31:49.922 "raid_level": "raid1", 00:31:49.922 "superblock": true, 00:31:49.922 "num_base_bdevs": 2, 00:31:49.922 "num_base_bdevs_discovered": 2, 00:31:49.922 "num_base_bdevs_operational": 2, 00:31:49.922 "base_bdevs_list": [ 00:31:49.922 { 00:31:49.922 "name": "BaseBdev1", 00:31:49.922 "uuid": "c18f3e3b-8def-4525-aa03-168263ef9c9b", 00:31:49.922 "is_configured": true, 00:31:49.922 "data_offset": 256, 00:31:49.922 "data_size": 7936 00:31:49.922 }, 00:31:49.922 { 00:31:49.922 "name": "BaseBdev2", 00:31:49.922 "uuid": "3efc5a9d-0b9b-413a-988e-8809803f1d89", 00:31:49.922 "is_configured": true, 00:31:49.922 "data_offset": 256, 00:31:49.922 "data_size": 7936 00:31:49.922 } 00:31:49.922 ] 00:31:49.922 }' 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.922 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:50.490 07:29:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.490 [2024-11-20 07:29:14.534938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.490 "name": "Existed_Raid", 00:31:50.490 "aliases": [ 00:31:50.490 "857a75e0-c9aa-40f4-be73-32d6bf03973d" 00:31:50.490 ], 00:31:50.490 "product_name": "Raid Volume", 00:31:50.490 "block_size": 4096, 00:31:50.490 "num_blocks": 7936, 00:31:50.490 "uuid": "857a75e0-c9aa-40f4-be73-32d6bf03973d", 00:31:50.490 "assigned_rate_limits": { 00:31:50.490 "rw_ios_per_sec": 0, 00:31:50.490 "rw_mbytes_per_sec": 0, 00:31:50.490 "r_mbytes_per_sec": 0, 00:31:50.490 "w_mbytes_per_sec": 0 00:31:50.490 }, 00:31:50.490 "claimed": false, 00:31:50.490 "zoned": false, 00:31:50.490 "supported_io_types": { 00:31:50.490 "read": true, 
00:31:50.490 "write": true, 00:31:50.490 "unmap": false, 00:31:50.490 "flush": false, 00:31:50.490 "reset": true, 00:31:50.490 "nvme_admin": false, 00:31:50.490 "nvme_io": false, 00:31:50.490 "nvme_io_md": false, 00:31:50.490 "write_zeroes": true, 00:31:50.490 "zcopy": false, 00:31:50.490 "get_zone_info": false, 00:31:50.490 "zone_management": false, 00:31:50.490 "zone_append": false, 00:31:50.490 "compare": false, 00:31:50.490 "compare_and_write": false, 00:31:50.490 "abort": false, 00:31:50.490 "seek_hole": false, 00:31:50.490 "seek_data": false, 00:31:50.490 "copy": false, 00:31:50.490 "nvme_iov_md": false 00:31:50.490 }, 00:31:50.490 "memory_domains": [ 00:31:50.490 { 00:31:50.490 "dma_device_id": "system", 00:31:50.490 "dma_device_type": 1 00:31:50.490 }, 00:31:50.490 { 00:31:50.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.490 "dma_device_type": 2 00:31:50.490 }, 00:31:50.490 { 00:31:50.490 "dma_device_id": "system", 00:31:50.490 "dma_device_type": 1 00:31:50.490 }, 00:31:50.490 { 00:31:50.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.490 "dma_device_type": 2 00:31:50.490 } 00:31:50.490 ], 00:31:50.490 "driver_specific": { 00:31:50.490 "raid": { 00:31:50.490 "uuid": "857a75e0-c9aa-40f4-be73-32d6bf03973d", 00:31:50.490 "strip_size_kb": 0, 00:31:50.490 "state": "online", 00:31:50.490 "raid_level": "raid1", 00:31:50.490 "superblock": true, 00:31:50.490 "num_base_bdevs": 2, 00:31:50.490 "num_base_bdevs_discovered": 2, 00:31:50.490 "num_base_bdevs_operational": 2, 00:31:50.490 "base_bdevs_list": [ 00:31:50.490 { 00:31:50.490 "name": "BaseBdev1", 00:31:50.490 "uuid": "c18f3e3b-8def-4525-aa03-168263ef9c9b", 00:31:50.490 "is_configured": true, 00:31:50.490 "data_offset": 256, 00:31:50.490 "data_size": 7936 00:31:50.490 }, 00:31:50.490 { 00:31:50.490 "name": "BaseBdev2", 00:31:50.490 "uuid": "3efc5a9d-0b9b-413a-988e-8809803f1d89", 00:31:50.490 "is_configured": true, 00:31:50.490 "data_offset": 256, 00:31:50.490 "data_size": 7936 00:31:50.490 } 
00:31:50.490 ] 00:31:50.490 } 00:31:50.490 } 00:31:50.490 }' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:50.490 BaseBdev2' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.490 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.750 [2024-11-20 07:29:14.814759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:50.750 07:29:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.750 "name": "Existed_Raid", 00:31:50.750 "uuid": "857a75e0-c9aa-40f4-be73-32d6bf03973d", 00:31:50.750 "strip_size_kb": 0, 00:31:50.750 "state": "online", 00:31:50.750 "raid_level": "raid1", 00:31:50.750 "superblock": true, 00:31:50.750 
"num_base_bdevs": 2, 00:31:50.750 "num_base_bdevs_discovered": 1, 00:31:50.750 "num_base_bdevs_operational": 1, 00:31:50.750 "base_bdevs_list": [ 00:31:50.750 { 00:31:50.750 "name": null, 00:31:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.750 "is_configured": false, 00:31:50.750 "data_offset": 0, 00:31:50.750 "data_size": 7936 00:31:50.750 }, 00:31:50.750 { 00:31:50.750 "name": "BaseBdev2", 00:31:50.750 "uuid": "3efc5a9d-0b9b-413a-988e-8809803f1d89", 00:31:50.750 "is_configured": true, 00:31:50.750 "data_offset": 256, 00:31:50.750 "data_size": 7936 00:31:50.750 } 00:31:50.750 ] 00:31:50.750 }' 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.750 07:29:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.318 [2024-11-20 07:29:15.473403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:51.318 [2024-11-20 07:29:15.473565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:51.318 [2024-11-20 07:29:15.549101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:51.318 [2024-11-20 07:29:15.549414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:51.318 [2024-11-20 07:29:15.549578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.318 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:51.576 07:29:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86528 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86528 ']' 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86528 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86528 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.576 killing process with pid 86528 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86528' 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86528 00:31:51.576 [2024-11-20 07:29:15.638921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:51.576 07:29:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86528 00:31:51.576 [2024-11-20 07:29:15.653772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:52.512 07:29:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:31:52.512 00:31:52.512 real 0m5.533s 00:31:52.512 user 0m8.477s 00:31:52.512 sys 0m0.834s 00:31:52.512 07:29:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.512 ************************************ 00:31:52.512 END TEST raid_state_function_test_sb_4k 00:31:52.512 ************************************ 00:31:52.512 07:29:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.512 07:29:16 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:52.512 07:29:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.512 07:29:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.512 07:29:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:52.512 ************************************ 00:31:52.512 START TEST raid_superblock_test_4k 00:31:52.512 ************************************ 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:52.512 
07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86782 00:31:52.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86782 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86782 ']' 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.512 07:29:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.512 [2024-11-20 07:29:16.750783] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:31:52.512 [2024-11-20 07:29:16.751360] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86782 ] 00:31:52.770 [2024-11-20 07:29:16.934250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.028 [2024-11-20 07:29:17.064726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.028 [2024-11-20 07:29:17.261543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:53.028 [2024-11-20 07:29:17.261602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.614 malloc1 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.614 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.614 [2024-11-20 07:29:17.722746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:53.614 [2024-11-20 07:29:17.722822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.614 [2024-11-20 07:29:17.722869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:53.614 [2024-11-20 07:29:17.722884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.615 [2024-11-20 07:29:17.726026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.615 [2024-11-20 07:29:17.726067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:53.615 pt1 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.615 malloc2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.615 [2024-11-20 07:29:17.777961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:53.615 [2024-11-20 07:29:17.778050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.615 [2024-11-20 07:29:17.778082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:53.615 [2024-11-20 07:29:17.778096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.615 [2024-11-20 07:29:17.780789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.615 [2024-11-20 
07:29:17.781034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:53.615 pt2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.615 [2024-11-20 07:29:17.786042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:53.615 [2024-11-20 07:29:17.788444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:53.615 [2024-11-20 07:29:17.788849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:53.615 [2024-11-20 07:29:17.788993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:53.615 [2024-11-20 07:29:17.789306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:53.615 [2024-11-20 07:29:17.789712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:53.615 [2024-11-20 07:29:17.789909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:53.615 [2024-11-20 07:29:17.790335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:53.615 "name": "raid_bdev1", 00:31:53.615 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:53.615 "strip_size_kb": 0, 00:31:53.615 "state": "online", 00:31:53.615 "raid_level": "raid1", 00:31:53.615 "superblock": true, 00:31:53.615 "num_base_bdevs": 2, 00:31:53.615 
"num_base_bdevs_discovered": 2, 00:31:53.615 "num_base_bdevs_operational": 2, 00:31:53.615 "base_bdevs_list": [ 00:31:53.615 { 00:31:53.615 "name": "pt1", 00:31:53.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.615 "is_configured": true, 00:31:53.615 "data_offset": 256, 00:31:53.615 "data_size": 7936 00:31:53.615 }, 00:31:53.615 { 00:31:53.615 "name": "pt2", 00:31:53.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:53.615 "is_configured": true, 00:31:53.615 "data_offset": 256, 00:31:53.615 "data_size": 7936 00:31:53.615 } 00:31:53.615 ] 00:31:53.615 }' 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:53.615 07:29:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.183 [2024-11-20 07:29:18.322773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:54.183 "name": "raid_bdev1", 00:31:54.183 "aliases": [ 00:31:54.183 "0100d127-f6f4-4ef1-8667-e7ef5024b3f1" 00:31:54.183 ], 00:31:54.183 "product_name": "Raid Volume", 00:31:54.183 "block_size": 4096, 00:31:54.183 "num_blocks": 7936, 00:31:54.183 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:54.183 "assigned_rate_limits": { 00:31:54.183 "rw_ios_per_sec": 0, 00:31:54.183 "rw_mbytes_per_sec": 0, 00:31:54.183 "r_mbytes_per_sec": 0, 00:31:54.183 "w_mbytes_per_sec": 0 00:31:54.183 }, 00:31:54.183 "claimed": false, 00:31:54.183 "zoned": false, 00:31:54.183 "supported_io_types": { 00:31:54.183 "read": true, 00:31:54.183 "write": true, 00:31:54.183 "unmap": false, 00:31:54.183 "flush": false, 00:31:54.183 "reset": true, 00:31:54.183 "nvme_admin": false, 00:31:54.183 "nvme_io": false, 00:31:54.183 "nvme_io_md": false, 00:31:54.183 "write_zeroes": true, 00:31:54.183 "zcopy": false, 00:31:54.183 "get_zone_info": false, 00:31:54.183 "zone_management": false, 00:31:54.183 "zone_append": false, 00:31:54.183 "compare": false, 00:31:54.183 "compare_and_write": false, 00:31:54.183 "abort": false, 00:31:54.183 "seek_hole": false, 00:31:54.183 "seek_data": false, 00:31:54.183 "copy": false, 00:31:54.183 "nvme_iov_md": false 00:31:54.183 }, 00:31:54.183 "memory_domains": [ 00:31:54.183 { 00:31:54.183 "dma_device_id": "system", 00:31:54.183 "dma_device_type": 1 00:31:54.183 }, 00:31:54.183 { 00:31:54.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.183 "dma_device_type": 2 00:31:54.183 }, 00:31:54.183 { 00:31:54.183 "dma_device_id": "system", 00:31:54.183 "dma_device_type": 1 00:31:54.183 }, 00:31:54.183 { 00:31:54.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.183 "dma_device_type": 2 00:31:54.183 } 00:31:54.183 ], 
00:31:54.183 "driver_specific": { 00:31:54.183 "raid": { 00:31:54.183 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:54.183 "strip_size_kb": 0, 00:31:54.183 "state": "online", 00:31:54.183 "raid_level": "raid1", 00:31:54.183 "superblock": true, 00:31:54.183 "num_base_bdevs": 2, 00:31:54.183 "num_base_bdevs_discovered": 2, 00:31:54.183 "num_base_bdevs_operational": 2, 00:31:54.183 "base_bdevs_list": [ 00:31:54.183 { 00:31:54.183 "name": "pt1", 00:31:54.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:54.183 "is_configured": true, 00:31:54.183 "data_offset": 256, 00:31:54.183 "data_size": 7936 00:31:54.183 }, 00:31:54.183 { 00:31:54.183 "name": "pt2", 00:31:54.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:54.183 "is_configured": true, 00:31:54.183 "data_offset": 256, 00:31:54.183 "data_size": 7936 00:31:54.183 } 00:31:54.183 ] 00:31:54.183 } 00:31:54.183 } 00:31:54.183 }' 00:31:54.183 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:54.184 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:54.184 pt2' 00:31:54.184 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.184 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:54.184 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.443 07:29:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 [2024-11-20 07:29:18.594842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0100d127-f6f4-4ef1-8667-e7ef5024b3f1 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 0100d127-f6f4-4ef1-8667-e7ef5024b3f1 ']' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 [2024-11-20 07:29:18.646454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:54.443 [2024-11-20 07:29:18.646668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:54.443 [2024-11-20 07:29:18.646804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:54.443 [2024-11-20 07:29:18.646887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:54.443 [2024-11-20 07:29:18.646912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.443 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.703 [2024-11-20 07:29:18.790533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:54.703 [2024-11-20 07:29:18.793322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:54.703 [2024-11-20 07:29:18.793414] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:54.703 [2024-11-20 07:29:18.793527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:54.703 [2024-11-20 07:29:18.793555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:54.703 [2024-11-20 07:29:18.793571] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:54.703 request: 00:31:54.703 { 00:31:54.703 "name": "raid_bdev1", 00:31:54.703 "raid_level": "raid1", 00:31:54.703 "base_bdevs": [ 00:31:54.703 "malloc1", 00:31:54.703 "malloc2" 00:31:54.703 ], 00:31:54.703 "superblock": false, 00:31:54.703 "method": "bdev_raid_create", 00:31:54.703 "req_id": 1 00:31:54.703 } 00:31:54.703 Got JSON-RPC error response 00:31:54.703 response: 00:31:54.703 { 00:31:54.703 "code": -17, 00:31:54.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:54.703 } 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:31:54.703 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.704 [2024-11-20 07:29:18.862520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:54.704 [2024-11-20 07:29:18.862810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:54.704 [2024-11-20 07:29:18.862885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:54.704 [2024-11-20 07:29:18.863052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:54.704 [2024-11-20 07:29:18.866066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:54.704 [2024-11-20 07:29:18.866271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:54.704 [2024-11-20 07:29:18.866507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:54.704 [2024-11-20 07:29:18.866728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:54.704 pt1 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:54.704 "name": "raid_bdev1", 00:31:54.704 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:54.704 "strip_size_kb": 0, 00:31:54.704 "state": "configuring", 00:31:54.704 "raid_level": "raid1", 00:31:54.704 "superblock": true, 00:31:54.704 "num_base_bdevs": 2, 00:31:54.704 "num_base_bdevs_discovered": 1, 00:31:54.704 "num_base_bdevs_operational": 2, 00:31:54.704 "base_bdevs_list": [ 00:31:54.704 { 00:31:54.704 "name": "pt1", 00:31:54.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:54.704 "is_configured": true, 00:31:54.704 "data_offset": 256, 00:31:54.704 "data_size": 7936 00:31:54.704 }, 00:31:54.704 { 00:31:54.704 "name": null, 00:31:54.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:54.704 "is_configured": false, 00:31:54.704 "data_offset": 256, 00:31:54.704 "data_size": 7936 00:31:54.704 } 
00:31:54.704 ] 00:31:54.704 }' 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:54.704 07:29:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.271 [2024-11-20 07:29:19.394767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:55.271 [2024-11-20 07:29:19.394868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:55.271 [2024-11-20 07:29:19.394901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:55.271 [2024-11-20 07:29:19.394918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:55.271 [2024-11-20 07:29:19.395652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:55.271 [2024-11-20 07:29:19.395732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:55.271 [2024-11-20 07:29:19.395838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:55.271 [2024-11-20 07:29:19.395882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:55.271 [2024-11-20 07:29:19.396046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:31:55.271 [2024-11-20 07:29:19.396067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:55.271 [2024-11-20 07:29:19.396366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:55.271 [2024-11-20 07:29:19.396576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:55.271 [2024-11-20 07:29:19.396609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:55.271 [2024-11-20 07:29:19.396821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.271 pt2 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:55.271 "name": "raid_bdev1", 00:31:55.271 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:55.271 "strip_size_kb": 0, 00:31:55.271 "state": "online", 00:31:55.271 "raid_level": "raid1", 00:31:55.271 "superblock": true, 00:31:55.271 "num_base_bdevs": 2, 00:31:55.271 "num_base_bdevs_discovered": 2, 00:31:55.271 "num_base_bdevs_operational": 2, 00:31:55.271 "base_bdevs_list": [ 00:31:55.271 { 00:31:55.271 "name": "pt1", 00:31:55.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:55.271 "is_configured": true, 00:31:55.271 "data_offset": 256, 00:31:55.271 "data_size": 7936 00:31:55.271 }, 00:31:55.271 { 00:31:55.271 "name": "pt2", 00:31:55.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:55.271 "is_configured": true, 00:31:55.271 "data_offset": 256, 00:31:55.271 "data_size": 7936 00:31:55.271 } 00:31:55.271 ] 00:31:55.271 }' 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:55.271 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:55.838 [2024-11-20 07:29:19.935307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.838 07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:55.838 "name": "raid_bdev1", 00:31:55.838 "aliases": [ 00:31:55.838 "0100d127-f6f4-4ef1-8667-e7ef5024b3f1" 00:31:55.838 ], 00:31:55.838 "product_name": "Raid Volume", 00:31:55.838 "block_size": 4096, 00:31:55.838 "num_blocks": 7936, 00:31:55.838 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:55.838 "assigned_rate_limits": { 00:31:55.838 "rw_ios_per_sec": 0, 00:31:55.838 "rw_mbytes_per_sec": 0, 00:31:55.838 "r_mbytes_per_sec": 0, 00:31:55.838 "w_mbytes_per_sec": 0 00:31:55.838 }, 00:31:55.838 "claimed": false, 00:31:55.838 "zoned": false, 00:31:55.838 "supported_io_types": { 00:31:55.838 "read": true, 00:31:55.838 "write": true, 00:31:55.838 "unmap": false, 
00:31:55.838 "flush": false, 00:31:55.838 "reset": true, 00:31:55.838 "nvme_admin": false, 00:31:55.838 "nvme_io": false, 00:31:55.838 "nvme_io_md": false, 00:31:55.838 "write_zeroes": true, 00:31:55.838 "zcopy": false, 00:31:55.838 "get_zone_info": false, 00:31:55.838 "zone_management": false, 00:31:55.838 "zone_append": false, 00:31:55.838 "compare": false, 00:31:55.838 "compare_and_write": false, 00:31:55.838 "abort": false, 00:31:55.838 "seek_hole": false, 00:31:55.838 "seek_data": false, 00:31:55.838 "copy": false, 00:31:55.838 "nvme_iov_md": false 00:31:55.838 }, 00:31:55.838 "memory_domains": [ 00:31:55.838 { 00:31:55.838 "dma_device_id": "system", 00:31:55.838 "dma_device_type": 1 00:31:55.838 }, 00:31:55.838 { 00:31:55.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.838 "dma_device_type": 2 00:31:55.838 }, 00:31:55.838 { 00:31:55.838 "dma_device_id": "system", 00:31:55.838 "dma_device_type": 1 00:31:55.838 }, 00:31:55.838 { 00:31:55.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.838 "dma_device_type": 2 00:31:55.838 } 00:31:55.838 ], 00:31:55.838 "driver_specific": { 00:31:55.838 "raid": { 00:31:55.838 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:55.838 "strip_size_kb": 0, 00:31:55.838 "state": "online", 00:31:55.838 "raid_level": "raid1", 00:31:55.838 "superblock": true, 00:31:55.838 "num_base_bdevs": 2, 00:31:55.838 "num_base_bdevs_discovered": 2, 00:31:55.838 "num_base_bdevs_operational": 2, 00:31:55.838 "base_bdevs_list": [ 00:31:55.838 { 00:31:55.838 "name": "pt1", 00:31:55.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:55.838 "is_configured": true, 00:31:55.838 "data_offset": 256, 00:31:55.838 "data_size": 7936 00:31:55.838 }, 00:31:55.838 { 00:31:55.838 "name": "pt2", 00:31:55.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:55.838 "is_configured": true, 00:31:55.838 "data_offset": 256, 00:31:55.838 "data_size": 7936 00:31:55.838 } 00:31:55.838 ] 00:31:55.838 } 00:31:55.838 } 00:31:55.838 }' 00:31:55.838 
07:29:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:55.838 pt2' 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.838 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.098 [2024-11-20 07:29:20.207412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 0100d127-f6f4-4ef1-8667-e7ef5024b3f1 '!=' 0100d127-f6f4-4ef1-8667-e7ef5024b3f1 ']' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.098 [2024-11-20 07:29:20.259154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.098 "name": "raid_bdev1", 00:31:56.098 "uuid": 
"0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:56.098 "strip_size_kb": 0, 00:31:56.098 "state": "online", 00:31:56.098 "raid_level": "raid1", 00:31:56.098 "superblock": true, 00:31:56.098 "num_base_bdevs": 2, 00:31:56.098 "num_base_bdevs_discovered": 1, 00:31:56.098 "num_base_bdevs_operational": 1, 00:31:56.098 "base_bdevs_list": [ 00:31:56.098 { 00:31:56.098 "name": null, 00:31:56.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.098 "is_configured": false, 00:31:56.098 "data_offset": 0, 00:31:56.098 "data_size": 7936 00:31:56.098 }, 00:31:56.098 { 00:31:56.098 "name": "pt2", 00:31:56.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.098 "is_configured": true, 00:31:56.098 "data_offset": 256, 00:31:56.098 "data_size": 7936 00:31:56.098 } 00:31:56.098 ] 00:31:56.098 }' 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.098 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 [2024-11-20 07:29:20.795250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:56.668 [2024-11-20 07:29:20.795287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:56.668 [2024-11-20 07:29:20.795425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:56.668 [2024-11-20 07:29:20.795504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:56.668 [2024-11-20 07:29:20.795523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 [2024-11-20 07:29:20.871272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:56.668 [2024-11-20 07:29:20.871389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.668 [2024-11-20 07:29:20.871418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:56.668 [2024-11-20 07:29:20.871435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.668 [2024-11-20 07:29:20.874330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.668 [2024-11-20 07:29:20.874391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:56.668 [2024-11-20 07:29:20.874499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:56.668 [2024-11-20 07:29:20.874562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:56.668 [2024-11-20 07:29:20.874761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:56.668 [2024-11-20 07:29:20.874785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:56.668 [2024-11-20 07:29:20.875136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:56.668 [2024-11-20 07:29:20.875368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:56.668 [2024-11-20 07:29:20.875398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:31:56.668 [2024-11-20 07:29:20.875842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.668 pt2 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.668 07:29:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.668 "name": "raid_bdev1", 00:31:56.668 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:56.668 "strip_size_kb": 0, 00:31:56.668 "state": "online", 00:31:56.668 "raid_level": "raid1", 00:31:56.668 "superblock": true, 00:31:56.668 "num_base_bdevs": 2, 00:31:56.668 "num_base_bdevs_discovered": 1, 00:31:56.668 "num_base_bdevs_operational": 1, 00:31:56.668 "base_bdevs_list": [ 00:31:56.668 { 00:31:56.668 "name": null, 00:31:56.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.668 "is_configured": false, 00:31:56.668 "data_offset": 256, 00:31:56.668 "data_size": 7936 00:31:56.668 }, 00:31:56.668 { 00:31:56.668 "name": "pt2", 00:31:56.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.668 "is_configured": true, 00:31:56.668 "data_offset": 256, 00:31:56.668 "data_size": 7936 00:31:56.668 } 00:31:56.668 ] 00:31:56.668 }' 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.668 07:29:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.237 [2024-11-20 07:29:21.403787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.237 [2024-11-20 07:29:21.403825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.237 [2024-11-20 07:29:21.403921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.237 [2024-11-20 07:29:21.404055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:31:57.237 [2024-11-20 07:29:21.404070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.237 [2024-11-20 07:29:21.471853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:57.237 [2024-11-20 07:29:21.471961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.237 [2024-11-20 07:29:21.471993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:31:57.237 [2024-11-20 07:29:21.472007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.237 [2024-11-20 07:29:21.474980] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.237 [2024-11-20 07:29:21.475042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:57.237 [2024-11-20 07:29:21.475176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:57.237 [2024-11-20 07:29:21.475240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:57.237 [2024-11-20 07:29:21.475461] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:57.237 [2024-11-20 07:29:21.475493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.237 [2024-11-20 07:29:21.475541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:57.237 [2024-11-20 07:29:21.475628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:57.237 [2024-11-20 07:29:21.475788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:57.237 [2024-11-20 07:29:21.475806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:57.237 [2024-11-20 07:29:21.476141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:57.237 [2024-11-20 07:29:21.476341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:57.237 [2024-11-20 07:29:21.476360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:57.237 [2024-11-20 07:29:21.476647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.237 pt1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.237 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.496 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.496 "name": "raid_bdev1", 00:31:57.496 "uuid": "0100d127-f6f4-4ef1-8667-e7ef5024b3f1", 00:31:57.496 "strip_size_kb": 0, 00:31:57.496 "state": "online", 00:31:57.496 
"raid_level": "raid1", 00:31:57.496 "superblock": true, 00:31:57.496 "num_base_bdevs": 2, 00:31:57.496 "num_base_bdevs_discovered": 1, 00:31:57.496 "num_base_bdevs_operational": 1, 00:31:57.496 "base_bdevs_list": [ 00:31:57.496 { 00:31:57.496 "name": null, 00:31:57.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.496 "is_configured": false, 00:31:57.496 "data_offset": 256, 00:31:57.496 "data_size": 7936 00:31:57.496 }, 00:31:57.496 { 00:31:57.496 "name": "pt2", 00:31:57.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.496 "is_configured": true, 00:31:57.496 "data_offset": 256, 00:31:57.496 "data_size": 7936 00:31:57.496 } 00:31:57.496 ] 00:31:57.496 }' 00:31:57.497 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.497 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.756 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:57.756 07:29:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:57.756 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.756 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.756 07:29:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:31:57.756 [2024-11-20 07:29:22.020288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.756 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 0100d127-f6f4-4ef1-8667-e7ef5024b3f1 '!=' 0100d127-f6f4-4ef1-8667-e7ef5024b3f1 ']' 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86782 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86782 ']' 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86782 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86782 00:31:58.015 killing process with pid 86782 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86782' 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86782 00:31:58.015 [2024-11-20 07:29:22.097973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:58.015 07:29:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86782 00:31:58.015 [2024-11-20 07:29:22.098085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:58.015 [2024-11-20 07:29:22.098147] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:58.015 [2024-11-20 07:29:22.098168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:58.015 [2024-11-20 07:29:22.274410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:59.393 ************************************ 00:31:59.393 END TEST raid_superblock_test_4k 00:31:59.393 ************************************ 00:31:59.393 07:29:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:31:59.393 00:31:59.393 real 0m6.621s 00:31:59.393 user 0m10.509s 00:31:59.393 sys 0m1.008s 00:31:59.393 07:29:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.393 07:29:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.393 07:29:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:31:59.393 07:29:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:31:59.393 07:29:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:59.393 07:29:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.393 07:29:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:59.393 ************************************ 00:31:59.393 START TEST raid_rebuild_test_sb_4k 00:31:59.393 ************************************ 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:59.393 
07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:59.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87109 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87109 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87109 ']' 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.393 07:29:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.393 07:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.393 [2024-11-20 07:29:23.438928] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:31:59.393 [2024-11-20 07:29:23.439301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87109 ] 00:31:59.393 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:59.393 Zero copy mechanism will not be used. [2024-11-20 07:29:23.631995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.652 [2024-11-20 07:29:23.793748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.911 [2024-11-20 07:29:24.017158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:59.911 [2024-11-20 07:29:24.017606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:00.168 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.168 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:32:00.168 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:00.168 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:32:00.168 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.168 07:29:24
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 BaseBdev1_malloc 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 [2024-11-20 07:29:24.481668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:00.427 [2024-11-20 07:29:24.481771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.427 [2024-11-20 07:29:24.481811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:00.427 [2024-11-20 07:29:24.481836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.427 [2024-11-20 07:29:24.484751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.427 [2024-11-20 07:29:24.484805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:00.427 BaseBdev1 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 BaseBdev2_malloc 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 [2024-11-20 07:29:24.534120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:00.427 [2024-11-20 07:29:24.534196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.427 [2024-11-20 07:29:24.534225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:00.427 [2024-11-20 07:29:24.534245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.427 [2024-11-20 07:29:24.537219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.427 [2024-11-20 07:29:24.537272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:00.427 BaseBdev2 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 spare_malloc 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 spare_delay 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 [2024-11-20 07:29:24.603450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:00.427 [2024-11-20 07:29:24.603539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.427 [2024-11-20 07:29:24.603601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:00.427 [2024-11-20 07:29:24.603630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.427 [2024-11-20 07:29:24.606694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.427 [2024-11-20 07:29:24.606754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:00.427 spare 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.427 [2024-11-20 07:29:24.611727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:32:00.427 [2024-11-20 07:29:24.614424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:00.427 [2024-11-20 07:29:24.614720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:00.427 [2024-11-20 07:29:24.614748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:00.427 [2024-11-20 07:29:24.615080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:00.427 [2024-11-20 07:29:24.615502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:00.427 [2024-11-20 07:29:24.615528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:00.427 [2024-11-20 07:29:24.615790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.427 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:00.428 "name": "raid_bdev1", 00:32:00.428 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:00.428 "strip_size_kb": 0, 00:32:00.428 "state": "online", 00:32:00.428 "raid_level": "raid1", 00:32:00.428 "superblock": true, 00:32:00.428 "num_base_bdevs": 2, 00:32:00.428 "num_base_bdevs_discovered": 2, 00:32:00.428 "num_base_bdevs_operational": 2, 00:32:00.428 "base_bdevs_list": [ 00:32:00.428 { 00:32:00.428 "name": "BaseBdev1", 00:32:00.428 "uuid": "b7dce74e-c2dc-5c73-8032-469fdf99cabc", 00:32:00.428 "is_configured": true, 00:32:00.428 "data_offset": 256, 00:32:00.428 "data_size": 7936 00:32:00.428 }, 00:32:00.428 { 00:32:00.428 "name": "BaseBdev2", 00:32:00.428 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:00.428 "is_configured": true, 00:32:00.428 "data_offset": 256, 00:32:00.428 "data_size": 7936 00:32:00.428 } 00:32:00.428 ] 00:32:00.428 }' 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:00.428 07:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.994 [2024-11-20 07:29:25.144275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:00.994 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:01.252 [2024-11-20 07:29:25.536083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:01.512 /dev/nbd0 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:01.512 07:29:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:01.512 1+0 records in 00:32:01.512 1+0 records out 00:32:01.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279829 s, 14.6 MB/s 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:01.512 07:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:02.454 7936+0 records in 00:32:02.454 7936+0 records out 00:32:02.454 32505856 bytes (33 MB, 31 MiB) copied, 0.882213 s, 36.8 MB/s 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:02.454 07:29:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:02.454 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:02.722 [2024-11-20 07:29:26.781306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:02.722 [2024-11-20 07:29:26.793528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:02.722 07:29:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.722 "name": "raid_bdev1", 00:32:02.722 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:02.722 
"strip_size_kb": 0, 00:32:02.722 "state": "online", 00:32:02.722 "raid_level": "raid1", 00:32:02.722 "superblock": true, 00:32:02.722 "num_base_bdevs": 2, 00:32:02.722 "num_base_bdevs_discovered": 1, 00:32:02.722 "num_base_bdevs_operational": 1, 00:32:02.722 "base_bdevs_list": [ 00:32:02.722 { 00:32:02.722 "name": null, 00:32:02.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.722 "is_configured": false, 00:32:02.722 "data_offset": 0, 00:32:02.722 "data_size": 7936 00:32:02.722 }, 00:32:02.722 { 00:32:02.722 "name": "BaseBdev2", 00:32:02.722 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:02.722 "is_configured": true, 00:32:02.722 "data_offset": 256, 00:32:02.722 "data_size": 7936 00:32:02.722 } 00:32:02.722 ] 00:32:02.722 }' 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.722 07:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.291 07:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:03.291 07:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.291 07:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.291 [2024-11-20 07:29:27.289704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:03.291 [2024-11-20 07:29:27.306689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:03.291 07:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.291 07:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:03.291 [2024-11-20 07:29:27.309246] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.227 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:04.227 "name": "raid_bdev1", 00:32:04.227 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:04.227 "strip_size_kb": 0, 00:32:04.227 "state": "online", 00:32:04.227 "raid_level": "raid1", 00:32:04.227 "superblock": true, 00:32:04.227 "num_base_bdevs": 2, 00:32:04.227 "num_base_bdevs_discovered": 2, 00:32:04.227 "num_base_bdevs_operational": 2, 00:32:04.227 "process": { 00:32:04.227 "type": "rebuild", 00:32:04.227 "target": "spare", 00:32:04.227 "progress": { 00:32:04.227 "blocks": 2560, 00:32:04.227 "percent": 32 00:32:04.227 } 00:32:04.227 }, 00:32:04.227 "base_bdevs_list": [ 00:32:04.227 { 00:32:04.227 "name": "spare", 00:32:04.227 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:04.227 "is_configured": true, 00:32:04.227 "data_offset": 256, 00:32:04.227 "data_size": 7936 00:32:04.227 }, 00:32:04.227 { 00:32:04.227 "name": "BaseBdev2", 
00:32:04.227 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:04.227 "is_configured": true, 00:32:04.227 "data_offset": 256, 00:32:04.227 "data_size": 7936 00:32:04.227 } 00:32:04.227 ] 00:32:04.227 }' 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.228 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:04.228 [2024-11-20 07:29:28.471090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.486 [2024-11-20 07:29:28.519074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:04.486 [2024-11-20 07:29:28.519315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:04.486 [2024-11-20 07:29:28.519613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.486 [2024-11-20 07:29:28.519747] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:04.486 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.486 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:04.486 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:32:04.486 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:04.486 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:04.487 "name": "raid_bdev1", 00:32:04.487 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:04.487 "strip_size_kb": 0, 00:32:04.487 "state": "online", 00:32:04.487 "raid_level": "raid1", 00:32:04.487 "superblock": true, 00:32:04.487 "num_base_bdevs": 2, 00:32:04.487 "num_base_bdevs_discovered": 1, 00:32:04.487 "num_base_bdevs_operational": 1, 00:32:04.487 "base_bdevs_list": [ 00:32:04.487 { 00:32:04.487 "name": 
null, 00:32:04.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.487 "is_configured": false, 00:32:04.487 "data_offset": 0, 00:32:04.487 "data_size": 7936 00:32:04.487 }, 00:32:04.487 { 00:32:04.487 "name": "BaseBdev2", 00:32:04.487 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:04.487 "is_configured": true, 00:32:04.487 "data_offset": 256, 00:32:04.487 "data_size": 7936 00:32:04.487 } 00:32:04.487 ] 00:32:04.487 }' 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:04.487 07:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.055 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.055 "name": "raid_bdev1", 00:32:05.055 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:05.055 
"strip_size_kb": 0, 00:32:05.055 "state": "online", 00:32:05.055 "raid_level": "raid1", 00:32:05.055 "superblock": true, 00:32:05.055 "num_base_bdevs": 2, 00:32:05.056 "num_base_bdevs_discovered": 1, 00:32:05.056 "num_base_bdevs_operational": 1, 00:32:05.056 "base_bdevs_list": [ 00:32:05.056 { 00:32:05.056 "name": null, 00:32:05.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.056 "is_configured": false, 00:32:05.056 "data_offset": 0, 00:32:05.056 "data_size": 7936 00:32:05.056 }, 00:32:05.056 { 00:32:05.056 "name": "BaseBdev2", 00:32:05.056 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:05.056 "is_configured": true, 00:32:05.056 "data_offset": 256, 00:32:05.056 "data_size": 7936 00:32:05.056 } 00:32:05.056 ] 00:32:05.056 }' 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.056 [2024-11-20 07:29:29.260015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:05.056 [2024-11-20 07:29:29.277668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.056 07:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:32:05.056 [2024-11-20 07:29:29.280339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.433 "name": "raid_bdev1", 00:32:06.433 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:06.433 "strip_size_kb": 0, 00:32:06.433 "state": "online", 00:32:06.433 "raid_level": "raid1", 00:32:06.433 "superblock": true, 00:32:06.433 "num_base_bdevs": 2, 00:32:06.433 "num_base_bdevs_discovered": 2, 00:32:06.433 "num_base_bdevs_operational": 2, 00:32:06.433 "process": { 00:32:06.433 "type": "rebuild", 00:32:06.433 "target": "spare", 00:32:06.433 "progress": { 00:32:06.433 "blocks": 2560, 00:32:06.433 "percent": 32 00:32:06.433 } 00:32:06.433 }, 00:32:06.433 "base_bdevs_list": [ 00:32:06.433 { 
00:32:06.433 "name": "spare", 00:32:06.433 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:06.433 "is_configured": true, 00:32:06.433 "data_offset": 256, 00:32:06.433 "data_size": 7936 00:32:06.433 }, 00:32:06.433 { 00:32:06.433 "name": "BaseBdev2", 00:32:06.433 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:06.433 "is_configured": true, 00:32:06.433 "data_offset": 256, 00:32:06.433 "data_size": 7936 00:32:06.433 } 00:32:06.433 ] 00:32:06.433 }' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:06.433 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.433 "name": "raid_bdev1", 00:32:06.433 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:06.433 "strip_size_kb": 0, 00:32:06.433 "state": "online", 00:32:06.433 "raid_level": "raid1", 00:32:06.433 "superblock": true, 00:32:06.433 "num_base_bdevs": 2, 00:32:06.433 "num_base_bdevs_discovered": 2, 00:32:06.433 "num_base_bdevs_operational": 2, 00:32:06.433 "process": { 00:32:06.433 "type": "rebuild", 00:32:06.433 "target": "spare", 00:32:06.433 "progress": { 00:32:06.433 "blocks": 2816, 00:32:06.433 "percent": 35 00:32:06.433 } 00:32:06.433 }, 00:32:06.433 "base_bdevs_list": [ 00:32:06.433 { 00:32:06.433 "name": "spare", 00:32:06.433 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:06.433 "is_configured": true, 00:32:06.433 "data_offset": 256, 00:32:06.433 "data_size": 7936 00:32:06.433 }, 00:32:06.433 { 00:32:06.433 "name": "BaseBdev2", 00:32:06.433 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:06.433 
"is_configured": true, 00:32:06.433 "data_offset": 256, 00:32:06.433 "data_size": 7936 00:32:06.433 } 00:32:06.433 ] 00:32:06.433 }' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.433 07:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.370 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.370 07:29:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:07.370 "name": "raid_bdev1", 00:32:07.370 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:07.370 "strip_size_kb": 0, 00:32:07.370 "state": "online", 00:32:07.370 "raid_level": "raid1", 00:32:07.370 "superblock": true, 00:32:07.370 "num_base_bdevs": 2, 00:32:07.370 "num_base_bdevs_discovered": 2, 00:32:07.370 "num_base_bdevs_operational": 2, 00:32:07.370 "process": { 00:32:07.370 "type": "rebuild", 00:32:07.370 "target": "spare", 00:32:07.370 "progress": { 00:32:07.370 "blocks": 5888, 00:32:07.370 "percent": 74 00:32:07.370 } 00:32:07.370 }, 00:32:07.370 "base_bdevs_list": [ 00:32:07.370 { 00:32:07.370 "name": "spare", 00:32:07.370 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:07.370 "is_configured": true, 00:32:07.370 "data_offset": 256, 00:32:07.370 "data_size": 7936 00:32:07.370 }, 00:32:07.370 { 00:32:07.370 "name": "BaseBdev2", 00:32:07.370 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:07.370 "is_configured": true, 00:32:07.370 "data_offset": 256, 00:32:07.370 "data_size": 7936 00:32:07.370 } 00:32:07.370 ] 00:32:07.371 }' 00:32:07.371 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.629 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:07.629 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.629 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.629 07:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:08.196 [2024-11-20 07:29:32.404501] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:08.196 [2024-11-20 07:29:32.404574] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:08.196 
[2024-11-20 07:29:32.404777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:08.764 "name": "raid_bdev1", 00:32:08.764 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:08.764 "strip_size_kb": 0, 00:32:08.764 "state": "online", 00:32:08.764 "raid_level": "raid1", 00:32:08.764 "superblock": true, 00:32:08.764 "num_base_bdevs": 2, 00:32:08.764 "num_base_bdevs_discovered": 2, 00:32:08.764 "num_base_bdevs_operational": 2, 00:32:08.764 "base_bdevs_list": [ 00:32:08.764 { 00:32:08.764 "name": "spare", 00:32:08.764 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:08.764 "is_configured": true, 00:32:08.764 
"data_offset": 256, 00:32:08.764 "data_size": 7936 00:32:08.764 }, 00:32:08.764 { 00:32:08.764 "name": "BaseBdev2", 00:32:08.764 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:08.764 "is_configured": true, 00:32:08.764 "data_offset": 256, 00:32:08.764 "data_size": 7936 00:32:08.764 } 00:32:08.764 ] 00:32:08.764 }' 00:32:08.764 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:08.765 "name": "raid_bdev1", 00:32:08.765 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:08.765 "strip_size_kb": 0, 00:32:08.765 "state": "online", 00:32:08.765 "raid_level": "raid1", 00:32:08.765 "superblock": true, 00:32:08.765 "num_base_bdevs": 2, 00:32:08.765 "num_base_bdevs_discovered": 2, 00:32:08.765 "num_base_bdevs_operational": 2, 00:32:08.765 "base_bdevs_list": [ 00:32:08.765 { 00:32:08.765 "name": "spare", 00:32:08.765 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:08.765 "is_configured": true, 00:32:08.765 "data_offset": 256, 00:32:08.765 "data_size": 7936 00:32:08.765 }, 00:32:08.765 { 00:32:08.765 "name": "BaseBdev2", 00:32:08.765 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:08.765 "is_configured": true, 00:32:08.765 "data_offset": 256, 00:32:08.765 "data_size": 7936 00:32:08.765 } 00:32:08.765 ] 00:32:08.765 }' 00:32:08.765 07:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:08.765 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:08.765 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:09.024 "name": "raid_bdev1", 00:32:09.024 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:09.024 "strip_size_kb": 0, 00:32:09.024 "state": "online", 00:32:09.024 "raid_level": "raid1", 00:32:09.024 "superblock": true, 00:32:09.024 "num_base_bdevs": 2, 00:32:09.024 "num_base_bdevs_discovered": 2, 00:32:09.024 "num_base_bdevs_operational": 2, 00:32:09.024 "base_bdevs_list": [ 00:32:09.024 { 00:32:09.024 "name": "spare", 00:32:09.024 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:09.024 "is_configured": true, 00:32:09.024 "data_offset": 256, 00:32:09.024 "data_size": 7936 00:32:09.024 }, 00:32:09.024 { 00:32:09.024 "name": "BaseBdev2", 00:32:09.024 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:09.024 
"is_configured": true, 00:32:09.024 "data_offset": 256, 00:32:09.024 "data_size": 7936 00:32:09.024 } 00:32:09.024 ] 00:32:09.024 }' 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:09.024 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.591 [2024-11-20 07:29:33.587741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:09.591 [2024-11-20 07:29:33.587778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:09.591 [2024-11-20 07:29:33.587872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:09.591 [2024-11-20 07:29:33.588023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:09.591 [2024-11-20 07:29:33.588041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.591 07:29:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:09.591 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:09.592 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:09.850 /dev/nbd0 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.850 1+0 records in 00:32:09.850 1+0 records out 00:32:09.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004229 s, 9.7 MB/s 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:09.850 07:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:10.109 /dev/nbd1 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:10.109 1+0 records in 00:32:10.109 1+0 records out 00:32:10.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438172 s, 9.3 MB/s 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:10.109 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:10.110 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:10.110 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:10.110 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:10.110 07:29:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:10.110 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:10.110 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.369 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.628 
07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.628 07:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:32:10.887 [2024-11-20 07:29:35.117761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:10.887 [2024-11-20 07:29:35.117835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.887 [2024-11-20 07:29:35.117881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:10.887 [2024-11-20 07:29:35.117896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.887 [2024-11-20 07:29:35.120779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.887 [2024-11-20 07:29:35.120979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:10.887 [2024-11-20 07:29:35.121105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:10.887 [2024-11-20 07:29:35.121175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:10.887 [2024-11-20 07:29:35.121365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:10.887 spare 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.887 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.147 [2024-11-20 07:29:35.221492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:11.147 [2024-11-20 07:29:35.221526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:11.147 [2024-11-20 07:29:35.221920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:32:11.147 [2024-11-20 07:29:35.222200] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:11.147 [2024-11-20 07:29:35.222224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:11.147 [2024-11-20 07:29:35.222475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.147 07:29:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.147 "name": "raid_bdev1", 00:32:11.147 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:11.147 "strip_size_kb": 0, 00:32:11.147 "state": "online", 00:32:11.147 "raid_level": "raid1", 00:32:11.147 "superblock": true, 00:32:11.147 "num_base_bdevs": 2, 00:32:11.147 "num_base_bdevs_discovered": 2, 00:32:11.147 "num_base_bdevs_operational": 2, 00:32:11.147 "base_bdevs_list": [ 00:32:11.147 { 00:32:11.147 "name": "spare", 00:32:11.147 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:11.147 "is_configured": true, 00:32:11.147 "data_offset": 256, 00:32:11.147 "data_size": 7936 00:32:11.147 }, 00:32:11.147 { 00:32:11.147 "name": "BaseBdev2", 00:32:11.147 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:11.147 "is_configured": true, 00:32:11.147 "data_offset": 256, 00:32:11.147 "data_size": 7936 00:32:11.147 } 00:32:11.147 ] 00:32:11.147 }' 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.147 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.715 "name": "raid_bdev1", 00:32:11.715 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:11.715 "strip_size_kb": 0, 00:32:11.715 "state": "online", 00:32:11.715 "raid_level": "raid1", 00:32:11.715 "superblock": true, 00:32:11.715 "num_base_bdevs": 2, 00:32:11.715 "num_base_bdevs_discovered": 2, 00:32:11.715 "num_base_bdevs_operational": 2, 00:32:11.715 "base_bdevs_list": [ 00:32:11.715 { 00:32:11.715 "name": "spare", 00:32:11.715 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:11.715 "is_configured": true, 00:32:11.715 "data_offset": 256, 00:32:11.715 "data_size": 7936 00:32:11.715 }, 00:32:11.715 { 00:32:11.715 "name": "BaseBdev2", 00:32:11.715 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:11.715 "is_configured": true, 00:32:11.715 "data_offset": 256, 00:32:11.715 "data_size": 7936 00:32:11.715 } 00:32:11.715 ] 00:32:11.715 }' 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.715 07:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.975 [2024-11-20 07:29:36.006682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.975 07:29:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.975 "name": "raid_bdev1", 00:32:11.975 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:11.975 "strip_size_kb": 0, 00:32:11.975 "state": "online", 00:32:11.975 "raid_level": "raid1", 00:32:11.975 "superblock": true, 00:32:11.975 "num_base_bdevs": 2, 00:32:11.975 "num_base_bdevs_discovered": 1, 00:32:11.975 "num_base_bdevs_operational": 1, 00:32:11.975 "base_bdevs_list": [ 00:32:11.975 { 00:32:11.975 "name": null, 00:32:11.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.975 "is_configured": false, 00:32:11.975 "data_offset": 0, 00:32:11.975 "data_size": 7936 00:32:11.975 }, 00:32:11.975 { 00:32:11.975 "name": "BaseBdev2", 00:32:11.975 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:11.975 "is_configured": true, 00:32:11.975 "data_offset": 256, 00:32:11.975 "data_size": 7936 00:32:11.975 } 00:32:11.975 ] 00:32:11.975 }' 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.975 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:32:12.542 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:12.542 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.542 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:12.542 [2024-11-20 07:29:36.530889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.542 [2024-11-20 07:29:36.531204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:12.542 [2024-11-20 07:29:36.531232] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:12.542 [2024-11-20 07:29:36.531280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.542 [2024-11-20 07:29:36.546794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:32:12.542 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.542 07:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:12.542 [2024-11-20 07:29:36.549291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.479 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:13.479 "name": "raid_bdev1", 00:32:13.479 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:13.479 "strip_size_kb": 0, 00:32:13.479 "state": "online", 00:32:13.479 "raid_level": "raid1", 00:32:13.479 "superblock": true, 00:32:13.479 "num_base_bdevs": 2, 00:32:13.479 "num_base_bdevs_discovered": 2, 00:32:13.479 "num_base_bdevs_operational": 2, 00:32:13.479 "process": { 00:32:13.479 "type": "rebuild", 00:32:13.479 "target": "spare", 00:32:13.479 "progress": { 00:32:13.479 "blocks": 2560, 00:32:13.479 "percent": 32 00:32:13.479 } 00:32:13.479 }, 00:32:13.479 "base_bdevs_list": [ 00:32:13.479 { 00:32:13.479 "name": "spare", 00:32:13.480 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:13.480 "is_configured": true, 00:32:13.480 "data_offset": 256, 00:32:13.480 "data_size": 7936 00:32:13.480 }, 00:32:13.480 { 00:32:13.480 "name": "BaseBdev2", 00:32:13.480 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:13.480 "is_configured": true, 00:32:13.480 "data_offset": 256, 00:32:13.480 "data_size": 7936 00:32:13.480 } 00:32:13.480 ] 00:32:13.480 }' 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:13.480 07:29:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.480 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.480 [2024-11-20 07:29:37.722635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:13.480 [2024-11-20 07:29:37.758419] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:13.480 [2024-11-20 07:29:37.758515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.480 [2024-11-20 07:29:37.758536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:13.480 [2024-11-20 07:29:37.758549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:13.738 
07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.738 "name": "raid_bdev1", 00:32:13.738 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:13.738 "strip_size_kb": 0, 00:32:13.738 "state": "online", 00:32:13.738 "raid_level": "raid1", 00:32:13.738 "superblock": true, 00:32:13.738 "num_base_bdevs": 2, 00:32:13.738 "num_base_bdevs_discovered": 1, 00:32:13.738 "num_base_bdevs_operational": 1, 00:32:13.738 "base_bdevs_list": [ 00:32:13.738 { 00:32:13.738 "name": null, 00:32:13.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.738 "is_configured": false, 00:32:13.738 "data_offset": 0, 00:32:13.738 "data_size": 7936 00:32:13.738 }, 00:32:13.738 { 00:32:13.738 "name": "BaseBdev2", 00:32:13.738 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:13.738 "is_configured": true, 00:32:13.738 "data_offset": 256, 00:32:13.738 "data_size": 7936 00:32:13.738 } 00:32:13.738 ] 00:32:13.738 }' 00:32:13.738 07:29:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.738 07:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.306 07:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:14.306 07:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.306 07:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.307 [2024-11-20 07:29:38.306935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:14.307 [2024-11-20 07:29:38.307204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:14.307 [2024-11-20 07:29:38.307246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:14.307 [2024-11-20 07:29:38.307266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:14.307 [2024-11-20 07:29:38.307989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:14.307 [2024-11-20 07:29:38.308032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:14.307 [2024-11-20 07:29:38.308161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:14.307 [2024-11-20 07:29:38.308185] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:14.307 [2024-11-20 07:29:38.308202] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:14.307 [2024-11-20 07:29:38.308234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:14.307 [2024-11-20 07:29:38.323824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:32:14.307 spare 00:32:14.307 07:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.307 07:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:14.307 [2024-11-20 07:29:38.326356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:15.243 "name": "raid_bdev1", 00:32:15.243 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:15.243 "strip_size_kb": 0, 00:32:15.243 
"state": "online", 00:32:15.243 "raid_level": "raid1", 00:32:15.243 "superblock": true, 00:32:15.243 "num_base_bdevs": 2, 00:32:15.243 "num_base_bdevs_discovered": 2, 00:32:15.243 "num_base_bdevs_operational": 2, 00:32:15.243 "process": { 00:32:15.243 "type": "rebuild", 00:32:15.243 "target": "spare", 00:32:15.243 "progress": { 00:32:15.243 "blocks": 2560, 00:32:15.243 "percent": 32 00:32:15.243 } 00:32:15.243 }, 00:32:15.243 "base_bdevs_list": [ 00:32:15.243 { 00:32:15.243 "name": "spare", 00:32:15.243 "uuid": "f73c754b-9210-512f-afa8-5d97c89b9662", 00:32:15.243 "is_configured": true, 00:32:15.243 "data_offset": 256, 00:32:15.243 "data_size": 7936 00:32:15.243 }, 00:32:15.243 { 00:32:15.243 "name": "BaseBdev2", 00:32:15.243 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:15.243 "is_configured": true, 00:32:15.243 "data_offset": 256, 00:32:15.243 "data_size": 7936 00:32:15.243 } 00:32:15.243 ] 00:32:15.243 }' 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.243 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.243 [2024-11-20 07:29:39.500006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:15.502 [2024-11-20 07:29:39.534926] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:32:15.502 [2024-11-20 07:29:39.535052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:15.502 [2024-11-20 07:29:39.535079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:15.502 [2024-11-20 07:29:39.535090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.502 07:29:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:15.502 "name": "raid_bdev1", 00:32:15.502 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:15.502 "strip_size_kb": 0, 00:32:15.502 "state": "online", 00:32:15.502 "raid_level": "raid1", 00:32:15.502 "superblock": true, 00:32:15.502 "num_base_bdevs": 2, 00:32:15.502 "num_base_bdevs_discovered": 1, 00:32:15.502 "num_base_bdevs_operational": 1, 00:32:15.502 "base_bdevs_list": [ 00:32:15.502 { 00:32:15.502 "name": null, 00:32:15.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.502 "is_configured": false, 00:32:15.502 "data_offset": 0, 00:32:15.502 "data_size": 7936 00:32:15.502 }, 00:32:15.502 { 00:32:15.502 "name": "BaseBdev2", 00:32:15.502 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:15.502 "is_configured": true, 00:32:15.502 "data_offset": 256, 00:32:15.502 "data_size": 7936 00:32:15.502 } 00:32:15.502 ] 00:32:15.502 }' 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:15.502 07:29:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.069 "name": "raid_bdev1", 00:32:16.069 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:16.069 "strip_size_kb": 0, 00:32:16.069 "state": "online", 00:32:16.069 "raid_level": "raid1", 00:32:16.069 "superblock": true, 00:32:16.069 "num_base_bdevs": 2, 00:32:16.069 "num_base_bdevs_discovered": 1, 00:32:16.069 "num_base_bdevs_operational": 1, 00:32:16.069 "base_bdevs_list": [ 00:32:16.069 { 00:32:16.069 "name": null, 00:32:16.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.069 "is_configured": false, 00:32:16.069 "data_offset": 0, 00:32:16.069 "data_size": 7936 00:32:16.069 }, 00:32:16.069 { 00:32:16.069 "name": "BaseBdev2", 00:32:16.069 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:16.069 "is_configured": true, 00:32:16.069 "data_offset": 256, 00:32:16.069 "data_size": 7936 00:32:16.069 } 00:32:16.069 ] 00:32:16.069 }' 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.069 [2024-11-20 07:29:40.267287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:16.069 [2024-11-20 07:29:40.267357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.069 [2024-11-20 07:29:40.267406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:16.069 [2024-11-20 07:29:40.267431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.069 [2024-11-20 07:29:40.268096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.069 [2024-11-20 07:29:40.268138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:16.069 [2024-11-20 07:29:40.268262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:16.069 [2024-11-20 07:29:40.268282] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:16.069 [2024-11-20 07:29:40.268296] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:16.069 [2024-11-20 07:29:40.268313] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:16.069 BaseBdev1 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.069 07:29:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.005 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:17.263 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.263 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:17.263 "name": "raid_bdev1", 00:32:17.263 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:17.263 "strip_size_kb": 0, 00:32:17.263 "state": "online", 00:32:17.263 "raid_level": "raid1", 00:32:17.263 "superblock": true, 00:32:17.263 "num_base_bdevs": 2, 00:32:17.263 "num_base_bdevs_discovered": 1, 00:32:17.263 "num_base_bdevs_operational": 1, 00:32:17.263 "base_bdevs_list": [ 00:32:17.263 { 00:32:17.263 "name": null, 00:32:17.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.263 "is_configured": false, 00:32:17.263 "data_offset": 0, 00:32:17.263 "data_size": 7936 00:32:17.263 }, 00:32:17.263 { 00:32:17.263 "name": "BaseBdev2", 00:32:17.263 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:17.263 "is_configured": true, 00:32:17.263 "data_offset": 256, 00:32:17.263 "data_size": 7936 00:32:17.263 } 00:32:17.263 ] 00:32:17.263 }' 00:32:17.263 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:17.263 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.523 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:17.783 "name": "raid_bdev1", 00:32:17.783 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:17.783 "strip_size_kb": 0, 00:32:17.783 "state": "online", 00:32:17.783 "raid_level": "raid1", 00:32:17.783 "superblock": true, 00:32:17.783 "num_base_bdevs": 2, 00:32:17.783 "num_base_bdevs_discovered": 1, 00:32:17.783 "num_base_bdevs_operational": 1, 00:32:17.783 "base_bdevs_list": [ 00:32:17.783 { 00:32:17.783 "name": null, 00:32:17.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.783 "is_configured": false, 00:32:17.783 "data_offset": 0, 00:32:17.783 "data_size": 7936 00:32:17.783 }, 00:32:17.783 { 00:32:17.783 "name": "BaseBdev2", 00:32:17.783 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:17.783 "is_configured": true, 00:32:17.783 "data_offset": 256, 00:32:17.783 "data_size": 7936 00:32:17.783 } 00:32:17.783 ] 00:32:17.783 }' 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:17.783 [2024-11-20 07:29:41.960009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:17.783 [2024-11-20 07:29:41.960207] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:17.783 [2024-11-20 07:29:41.960228] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:17.783 request: 00:32:17.783 { 00:32:17.783 "base_bdev": "BaseBdev1", 00:32:17.783 "raid_bdev": "raid_bdev1", 00:32:17.783 "method": "bdev_raid_add_base_bdev", 00:32:17.783 "req_id": 1 00:32:17.783 } 00:32:17.783 Got JSON-RPC error response 00:32:17.783 response: 00:32:17.783 { 00:32:17.783 "code": -22, 00:32:17.783 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:17.783 } 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:17.783 07:29:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.726 07:29:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.984 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.984 "name": "raid_bdev1", 00:32:18.984 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:18.984 "strip_size_kb": 0, 00:32:18.984 "state": "online", 00:32:18.984 "raid_level": "raid1", 00:32:18.984 "superblock": true, 00:32:18.984 "num_base_bdevs": 2, 00:32:18.984 "num_base_bdevs_discovered": 1, 00:32:18.984 "num_base_bdevs_operational": 1, 00:32:18.984 "base_bdevs_list": [ 00:32:18.984 { 00:32:18.984 "name": null, 00:32:18.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.984 "is_configured": false, 00:32:18.984 "data_offset": 0, 00:32:18.984 "data_size": 7936 00:32:18.984 }, 00:32:18.984 { 00:32:18.984 "name": "BaseBdev2", 00:32:18.984 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:18.984 "is_configured": true, 00:32:18.984 "data_offset": 256, 00:32:18.984 "data_size": 7936 00:32:18.984 } 00:32:18.984 ] 00:32:18.984 }' 00:32:18.984 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.984 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:19.243 07:29:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:19.243 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:19.502 "name": "raid_bdev1", 00:32:19.502 "uuid": "45decee8-4fa3-477b-ab68-6abb6775afa3", 00:32:19.502 "strip_size_kb": 0, 00:32:19.502 "state": "online", 00:32:19.502 "raid_level": "raid1", 00:32:19.502 "superblock": true, 00:32:19.502 "num_base_bdevs": 2, 00:32:19.502 "num_base_bdevs_discovered": 1, 00:32:19.502 "num_base_bdevs_operational": 1, 00:32:19.502 "base_bdevs_list": [ 00:32:19.502 { 00:32:19.502 "name": null, 00:32:19.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.502 "is_configured": false, 00:32:19.502 "data_offset": 0, 00:32:19.502 "data_size": 7936 00:32:19.502 }, 00:32:19.502 { 00:32:19.502 "name": "BaseBdev2", 00:32:19.502 "uuid": "c85574f2-9ae8-50b8-ae8f-78d88c5a89cf", 00:32:19.502 "is_configured": true, 00:32:19.502 "data_offset": 256, 00:32:19.502 "data_size": 7936 00:32:19.502 } 00:32:19.502 ] 00:32:19.502 }' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:19.502 07:29:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87109 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87109 ']' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87109 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87109 00:32:19.502 killing process with pid 87109 00:32:19.502 Received shutdown signal, test time was about 60.000000 seconds 00:32:19.502 00:32:19.502 Latency(us) 00:32:19.502 [2024-11-20T07:29:43.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.502 [2024-11-20T07:29:43.791Z] =================================================================================================================== 00:32:19.502 [2024-11-20T07:29:43.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87109' 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87109 00:32:19.502 [2024-11-20 07:29:43.705059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:19.502 07:29:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87109 00:32:19.502 [2024-11-20 07:29:43.705212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:19.502 [2024-11-20 
07:29:43.705289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:19.502 [2024-11-20 07:29:43.705307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:19.761 [2024-11-20 07:29:43.939621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:21.136 ************************************ 00:32:21.136 END TEST raid_rebuild_test_sb_4k 00:32:21.136 07:29:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:32:21.136 00:32:21.136 real 0m21.669s 00:32:21.136 user 0m29.433s 00:32:21.136 sys 0m2.536s 00:32:21.136 07:29:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.136 07:29:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:21.136 ************************************ 00:32:21.136 07:29:45 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:32:21.136 07:29:45 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:21.136 07:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:21.136 07:29:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.136 07:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:21.136 ************************************ 00:32:21.136 START TEST raid_state_function_test_sb_md_separate 00:32:21.136 ************************************ 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:21.136 
07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:21.136 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:21.137 07:29:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87809 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87809' 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:21.137 Process raid pid: 87809 00:32:21.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87809 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87809 ']' 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.137 07:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:21.137 [2024-11-20 07:29:45.160725] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:21.137 [2024-11-20 07:29:45.161165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.137 [2024-11-20 07:29:45.347797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.395 [2024-11-20 07:29:45.482666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.654 [2024-11-20 07:29:45.700185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:21.654 [2024-11-20 07:29:45.700232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:21.914 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.914 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:21.914 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:21.914 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.914 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:21.914 [2024-11-20 07:29:46.196349] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:21.914 [2024-11-20 07:29:46.196556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:32:21.914 [2024-11-20 07:29:46.196598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:21.914 [2024-11-20 07:29:46.196620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.172 "name": "Existed_Raid", 00:32:22.172 "uuid": "bedee99d-060a-49f5-ab63-481bcbd109c5", 00:32:22.172 "strip_size_kb": 0, 00:32:22.172 "state": "configuring", 00:32:22.172 "raid_level": "raid1", 00:32:22.172 "superblock": true, 00:32:22.172 "num_base_bdevs": 2, 00:32:22.172 "num_base_bdevs_discovered": 0, 00:32:22.172 "num_base_bdevs_operational": 2, 00:32:22.172 "base_bdevs_list": [ 00:32:22.172 { 00:32:22.172 "name": "BaseBdev1", 00:32:22.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.172 "is_configured": false, 00:32:22.172 "data_offset": 0, 00:32:22.172 "data_size": 0 00:32:22.172 }, 00:32:22.172 { 00:32:22.172 "name": "BaseBdev2", 00:32:22.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.172 "is_configured": false, 00:32:22.172 "data_offset": 0, 00:32:22.172 "data_size": 0 00:32:22.172 } 00:32:22.172 ] 00:32:22.172 }' 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.172 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 
[2024-11-20 07:29:46.728458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:22.739 [2024-11-20 07:29:46.728508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 [2024-11-20 07:29:46.740452] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:22.739 [2024-11-20 07:29:46.740511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:22.739 [2024-11-20 07:29:46.740526] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:22.739 [2024-11-20 07:29:46.740545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 [2024-11-20 07:29:46.791930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:22.739 
BaseBdev1 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.739 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.739 [ 00:32:22.739 { 00:32:22.739 "name": "BaseBdev1", 00:32:22.739 "aliases": [ 00:32:22.739 "e28cab42-b6ab-4668-87a1-23f40c04343e" 00:32:22.739 ], 00:32:22.739 "product_name": "Malloc disk", 
00:32:22.739 "block_size": 4096, 00:32:22.739 "num_blocks": 8192, 00:32:22.739 "uuid": "e28cab42-b6ab-4668-87a1-23f40c04343e", 00:32:22.739 "md_size": 32, 00:32:22.739 "md_interleave": false, 00:32:22.739 "dif_type": 0, 00:32:22.739 "assigned_rate_limits": { 00:32:22.739 "rw_ios_per_sec": 0, 00:32:22.739 "rw_mbytes_per_sec": 0, 00:32:22.739 "r_mbytes_per_sec": 0, 00:32:22.739 "w_mbytes_per_sec": 0 00:32:22.739 }, 00:32:22.739 "claimed": true, 00:32:22.739 "claim_type": "exclusive_write", 00:32:22.739 "zoned": false, 00:32:22.739 "supported_io_types": { 00:32:22.739 "read": true, 00:32:22.739 "write": true, 00:32:22.739 "unmap": true, 00:32:22.739 "flush": true, 00:32:22.739 "reset": true, 00:32:22.739 "nvme_admin": false, 00:32:22.739 "nvme_io": false, 00:32:22.739 "nvme_io_md": false, 00:32:22.739 "write_zeroes": true, 00:32:22.739 "zcopy": true, 00:32:22.739 "get_zone_info": false, 00:32:22.739 "zone_management": false, 00:32:22.739 "zone_append": false, 00:32:22.739 "compare": false, 00:32:22.739 "compare_and_write": false, 00:32:22.739 "abort": true, 00:32:22.739 "seek_hole": false, 00:32:22.739 "seek_data": false, 00:32:22.739 "copy": true, 00:32:22.739 "nvme_iov_md": false 00:32:22.739 }, 00:32:22.739 "memory_domains": [ 00:32:22.739 { 00:32:22.739 "dma_device_id": "system", 00:32:22.739 "dma_device_type": 1 00:32:22.739 }, 00:32:22.739 { 00:32:22.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.740 "dma_device_type": 2 00:32:22.740 } 00:32:22.740 ], 00:32:22.740 "driver_specific": {} 00:32:22.740 } 00:32:22.740 ] 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:22.740 07:29:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.740 "name": "Existed_Raid", 00:32:22.740 "uuid": "fc978534-ec02-46bf-9dd3-34c329371e14", 
00:32:22.740 "strip_size_kb": 0, 00:32:22.740 "state": "configuring", 00:32:22.740 "raid_level": "raid1", 00:32:22.740 "superblock": true, 00:32:22.740 "num_base_bdevs": 2, 00:32:22.740 "num_base_bdevs_discovered": 1, 00:32:22.740 "num_base_bdevs_operational": 2, 00:32:22.740 "base_bdevs_list": [ 00:32:22.740 { 00:32:22.740 "name": "BaseBdev1", 00:32:22.740 "uuid": "e28cab42-b6ab-4668-87a1-23f40c04343e", 00:32:22.740 "is_configured": true, 00:32:22.740 "data_offset": 256, 00:32:22.740 "data_size": 7936 00:32:22.740 }, 00:32:22.740 { 00:32:22.740 "name": "BaseBdev2", 00:32:22.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.740 "is_configured": false, 00:32:22.740 "data_offset": 0, 00:32:22.740 "data_size": 0 00:32:22.740 } 00:32:22.740 ] 00:32:22.740 }' 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.740 07:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.306 [2024-11-20 07:29:47.344354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:23.306 [2024-11-20 07:29:47.344416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:23.306 07:29:47 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.306 [2024-11-20 07:29:47.352389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:23.306 [2024-11-20 07:29:47.354940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:23.306 [2024-11-20 07:29:47.354998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.306 "name": "Existed_Raid", 00:32:23.306 "uuid": "d2935d27-f5eb-4521-8d17-c9513ab5ff2b", 00:32:23.306 "strip_size_kb": 0, 00:32:23.306 "state": "configuring", 00:32:23.306 "raid_level": "raid1", 00:32:23.306 "superblock": true, 00:32:23.306 "num_base_bdevs": 2, 00:32:23.306 "num_base_bdevs_discovered": 1, 00:32:23.306 "num_base_bdevs_operational": 2, 00:32:23.306 "base_bdevs_list": [ 00:32:23.306 { 00:32:23.306 "name": "BaseBdev1", 00:32:23.306 "uuid": "e28cab42-b6ab-4668-87a1-23f40c04343e", 00:32:23.306 "is_configured": true, 00:32:23.306 "data_offset": 256, 00:32:23.306 "data_size": 7936 00:32:23.306 }, 00:32:23.306 { 00:32:23.306 "name": "BaseBdev2", 00:32:23.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.306 "is_configured": false, 00:32:23.306 "data_offset": 0, 00:32:23.306 "data_size": 0 00:32:23.306 } 00:32:23.306 ] 00:32:23.306 }' 00:32:23.306 07:29:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.306 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.874 [2024-11-20 07:29:47.910342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:23.874 [2024-11-20 07:29:47.910644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:23.874 [2024-11-20 07:29:47.910664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:23.874 [2024-11-20 07:29:47.910772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:23.874 BaseBdev2 00:32:23.874 [2024-11-20 07:29:47.910946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:23.874 [2024-11-20 07:29:47.910965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:23.874 [2024-11-20 07:29:47.911094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.874 [ 00:32:23.874 { 00:32:23.874 "name": "BaseBdev2", 00:32:23.874 "aliases": [ 00:32:23.874 "45ac0e7b-1c4e-425e-ab96-e8851ac43484" 00:32:23.874 ], 00:32:23.874 "product_name": "Malloc disk", 00:32:23.874 "block_size": 4096, 00:32:23.874 "num_blocks": 8192, 00:32:23.874 "uuid": "45ac0e7b-1c4e-425e-ab96-e8851ac43484", 00:32:23.874 "md_size": 32, 00:32:23.874 "md_interleave": false, 00:32:23.874 "dif_type": 0, 00:32:23.874 "assigned_rate_limits": { 00:32:23.874 "rw_ios_per_sec": 0, 00:32:23.874 "rw_mbytes_per_sec": 0, 00:32:23.874 "r_mbytes_per_sec": 0, 00:32:23.874 "w_mbytes_per_sec": 0 00:32:23.874 }, 00:32:23.874 "claimed": true, 00:32:23.874 "claim_type": 
"exclusive_write", 00:32:23.874 "zoned": false, 00:32:23.874 "supported_io_types": { 00:32:23.874 "read": true, 00:32:23.874 "write": true, 00:32:23.874 "unmap": true, 00:32:23.874 "flush": true, 00:32:23.874 "reset": true, 00:32:23.874 "nvme_admin": false, 00:32:23.874 "nvme_io": false, 00:32:23.874 "nvme_io_md": false, 00:32:23.874 "write_zeroes": true, 00:32:23.874 "zcopy": true, 00:32:23.874 "get_zone_info": false, 00:32:23.874 "zone_management": false, 00:32:23.874 "zone_append": false, 00:32:23.874 "compare": false, 00:32:23.874 "compare_and_write": false, 00:32:23.874 "abort": true, 00:32:23.874 "seek_hole": false, 00:32:23.874 "seek_data": false, 00:32:23.874 "copy": true, 00:32:23.874 "nvme_iov_md": false 00:32:23.874 }, 00:32:23.874 "memory_domains": [ 00:32:23.874 { 00:32:23.874 "dma_device_id": "system", 00:32:23.874 "dma_device_type": 1 00:32:23.874 }, 00:32:23.874 { 00:32:23.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.874 "dma_device_type": 2 00:32:23.874 } 00:32:23.874 ], 00:32:23.874 "driver_specific": {} 00:32:23.874 } 00:32:23.874 ] 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:23.874 
07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.874 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.875 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.875 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.875 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.875 "name": "Existed_Raid", 00:32:23.875 "uuid": "d2935d27-f5eb-4521-8d17-c9513ab5ff2b", 00:32:23.875 "strip_size_kb": 0, 00:32:23.875 "state": "online", 00:32:23.875 "raid_level": "raid1", 00:32:23.875 "superblock": true, 00:32:23.875 "num_base_bdevs": 2, 00:32:23.875 "num_base_bdevs_discovered": 2, 00:32:23.875 "num_base_bdevs_operational": 2, 00:32:23.875 
"base_bdevs_list": [ 00:32:23.875 { 00:32:23.875 "name": "BaseBdev1", 00:32:23.875 "uuid": "e28cab42-b6ab-4668-87a1-23f40c04343e", 00:32:23.875 "is_configured": true, 00:32:23.875 "data_offset": 256, 00:32:23.875 "data_size": 7936 00:32:23.875 }, 00:32:23.875 { 00:32:23.875 "name": "BaseBdev2", 00:32:23.875 "uuid": "45ac0e7b-1c4e-425e-ab96-e8851ac43484", 00:32:23.875 "is_configured": true, 00:32:23.875 "data_offset": 256, 00:32:23.875 "data_size": 7936 00:32:23.875 } 00:32:23.875 ] 00:32:23.875 }' 00:32:23.875 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.875 07:29:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:32:24.441 [2024-11-20 07:29:48.475072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.441 "name": "Existed_Raid", 00:32:24.441 "aliases": [ 00:32:24.441 "d2935d27-f5eb-4521-8d17-c9513ab5ff2b" 00:32:24.441 ], 00:32:24.441 "product_name": "Raid Volume", 00:32:24.441 "block_size": 4096, 00:32:24.441 "num_blocks": 7936, 00:32:24.441 "uuid": "d2935d27-f5eb-4521-8d17-c9513ab5ff2b", 00:32:24.441 "md_size": 32, 00:32:24.441 "md_interleave": false, 00:32:24.441 "dif_type": 0, 00:32:24.441 "assigned_rate_limits": { 00:32:24.441 "rw_ios_per_sec": 0, 00:32:24.441 "rw_mbytes_per_sec": 0, 00:32:24.441 "r_mbytes_per_sec": 0, 00:32:24.441 "w_mbytes_per_sec": 0 00:32:24.441 }, 00:32:24.441 "claimed": false, 00:32:24.441 "zoned": false, 00:32:24.441 "supported_io_types": { 00:32:24.441 "read": true, 00:32:24.441 "write": true, 00:32:24.441 "unmap": false, 00:32:24.441 "flush": false, 00:32:24.441 "reset": true, 00:32:24.441 "nvme_admin": false, 00:32:24.441 "nvme_io": false, 00:32:24.441 "nvme_io_md": false, 00:32:24.441 "write_zeroes": true, 00:32:24.441 "zcopy": false, 00:32:24.441 "get_zone_info": false, 00:32:24.441 "zone_management": false, 00:32:24.441 "zone_append": false, 00:32:24.441 "compare": false, 00:32:24.441 "compare_and_write": false, 00:32:24.441 "abort": false, 00:32:24.441 "seek_hole": false, 00:32:24.441 "seek_data": false, 00:32:24.441 "copy": false, 00:32:24.441 "nvme_iov_md": false 00:32:24.441 }, 00:32:24.441 "memory_domains": [ 00:32:24.441 { 00:32:24.441 "dma_device_id": "system", 00:32:24.441 "dma_device_type": 1 00:32:24.441 }, 00:32:24.441 { 00:32:24.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.441 "dma_device_type": 2 00:32:24.441 }, 00:32:24.441 { 
00:32:24.441 "dma_device_id": "system", 00:32:24.441 "dma_device_type": 1 00:32:24.441 }, 00:32:24.441 { 00:32:24.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.441 "dma_device_type": 2 00:32:24.441 } 00:32:24.441 ], 00:32:24.441 "driver_specific": { 00:32:24.441 "raid": { 00:32:24.441 "uuid": "d2935d27-f5eb-4521-8d17-c9513ab5ff2b", 00:32:24.441 "strip_size_kb": 0, 00:32:24.441 "state": "online", 00:32:24.441 "raid_level": "raid1", 00:32:24.441 "superblock": true, 00:32:24.441 "num_base_bdevs": 2, 00:32:24.441 "num_base_bdevs_discovered": 2, 00:32:24.441 "num_base_bdevs_operational": 2, 00:32:24.441 "base_bdevs_list": [ 00:32:24.441 { 00:32:24.441 "name": "BaseBdev1", 00:32:24.441 "uuid": "e28cab42-b6ab-4668-87a1-23f40c04343e", 00:32:24.441 "is_configured": true, 00:32:24.441 "data_offset": 256, 00:32:24.441 "data_size": 7936 00:32:24.441 }, 00:32:24.441 { 00:32:24.441 "name": "BaseBdev2", 00:32:24.441 "uuid": "45ac0e7b-1c4e-425e-ab96-e8851ac43484", 00:32:24.441 "is_configured": true, 00:32:24.441 "data_offset": 256, 00:32:24.441 "data_size": 7936 00:32:24.441 } 00:32:24.441 ] 00:32:24.441 } 00:32:24.441 } 00:32:24.441 }' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:24.441 BaseBdev2' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.441 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.442 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:24.442 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.700 [2024-11-20 07:29:48.762822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.700 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.701 "name": "Existed_Raid", 00:32:24.701 "uuid": "d2935d27-f5eb-4521-8d17-c9513ab5ff2b", 00:32:24.701 "strip_size_kb": 0, 00:32:24.701 "state": "online", 00:32:24.701 "raid_level": "raid1", 00:32:24.701 "superblock": true, 00:32:24.701 "num_base_bdevs": 2, 00:32:24.701 "num_base_bdevs_discovered": 1, 00:32:24.701 "num_base_bdevs_operational": 1, 00:32:24.701 "base_bdevs_list": [ 00:32:24.701 { 00:32:24.701 "name": null, 00:32:24.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.701 "is_configured": false, 00:32:24.701 "data_offset": 0, 00:32:24.701 "data_size": 7936 00:32:24.701 }, 00:32:24.701 { 00:32:24.701 "name": "BaseBdev2", 00:32:24.701 "uuid": 
"45ac0e7b-1c4e-425e-ab96-e8851ac43484", 00:32:24.701 "is_configured": true, 00:32:24.701 "data_offset": 256, 00:32:24.701 "data_size": 7936 00:32:24.701 } 00:32:24.701 ] 00:32:24.701 }' 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.701 07:29:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.267 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.267 [2024-11-20 07:29:49.463827] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:25.267 [2024-11-20 07:29:49.463956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:25.525 [2024-11-20 07:29:49.559490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:25.526 [2024-11-20 07:29:49.559794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:25.526 [2024-11-20 07:29:49.560042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:25.526 07:29:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87809 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87809 ']' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87809 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87809 00:32:25.526 killing process with pid 87809 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87809' 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87809 00:32:25.526 [2024-11-20 07:29:49.647043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:25.526 07:29:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87809 00:32:25.526 [2024-11-20 07:29:49.662850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:26.492 07:29:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:32:26.492 00:32:26.492 real 0m5.717s 00:32:26.492 user 0m8.620s 00:32:26.492 sys 0m0.820s 00:32:26.492 07:29:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.492 
************************************ 00:32:26.492 END TEST raid_state_function_test_sb_md_separate 00:32:26.492 ************************************ 00:32:26.492 07:29:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.752 07:29:50 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:26.752 07:29:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:26.752 07:29:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.752 07:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:26.752 ************************************ 00:32:26.752 START TEST raid_superblock_test_md_separate 00:32:26.752 ************************************ 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88067 00:32:26.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88067 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88067 ']' 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.752 07:29:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.752 [2024-11-20 07:29:50.934622] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:26.752 [2024-11-20 07:29:50.934812] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88067 ] 00:32:27.011 [2024-11-20 07:29:51.116188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.011 [2024-11-20 07:29:51.261784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.270 [2024-11-20 07:29:51.478774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:27.270 [2024-11-20 07:29:51.478821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:27.839 07:29:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 malloc1 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 [2024-11-20 07:29:51.969897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:27.839 [2024-11-20 07:29:51.970106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.839 [2024-11-20 07:29:51.970190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:27.839 [2024-11-20 07:29:51.970423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.839 [2024-11-20 07:29:51.973182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.839 [2024-11-20 07:29:51.973352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:32:27.839 pt1 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 malloc2 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 [2024-11-20 07:29:52.030971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:27.839 [2024-11-20 07:29:52.031175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.839 [2024-11-20 07:29:52.031254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:27.839 [2024-11-20 07:29:52.031375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.839 [2024-11-20 07:29:52.033923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.839 [2024-11-20 07:29:52.034085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:27.839 pt2 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 [2024-11-20 07:29:52.043083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:27.839 [2024-11-20 07:29:52.045712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:27.839 [2024-11-20 07:29:52.046079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:27.839 [2024-11-20 07:29:52.046107] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:27.839 [2024-11-20 07:29:52.046208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:27.839 [2024-11-20 07:29:52.046373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:27.839 [2024-11-20 07:29:52.046394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:27.839 [2024-11-20 07:29:52.046541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.839 07:29:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.839 "name": "raid_bdev1", 00:32:27.839 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:27.839 "strip_size_kb": 0, 00:32:27.839 "state": "online", 00:32:27.839 "raid_level": "raid1", 00:32:27.839 "superblock": true, 00:32:27.839 "num_base_bdevs": 2, 00:32:27.839 "num_base_bdevs_discovered": 2, 00:32:27.839 "num_base_bdevs_operational": 2, 00:32:27.839 "base_bdevs_list": [ 00:32:27.839 { 00:32:27.839 "name": "pt1", 00:32:27.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.839 "is_configured": true, 00:32:27.839 "data_offset": 256, 00:32:27.839 "data_size": 7936 00:32:27.839 }, 00:32:27.839 { 00:32:27.839 "name": "pt2", 00:32:27.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.839 "is_configured": true, 00:32:27.839 "data_offset": 256, 00:32:27.839 "data_size": 7936 00:32:27.839 } 00:32:27.839 ] 00:32:27.839 }' 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.839 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.407 [2024-11-20 07:29:52.567755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.407 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:28.407 "name": "raid_bdev1", 00:32:28.407 "aliases": [ 00:32:28.407 "fec1e6d4-f1ab-4a6e-ac46-158478708872" 00:32:28.407 ], 00:32:28.407 "product_name": "Raid Volume", 00:32:28.407 "block_size": 4096, 00:32:28.407 "num_blocks": 7936, 00:32:28.407 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:28.407 "md_size": 32, 00:32:28.407 "md_interleave": false, 00:32:28.407 "dif_type": 0, 00:32:28.407 "assigned_rate_limits": { 00:32:28.407 "rw_ios_per_sec": 0, 00:32:28.407 "rw_mbytes_per_sec": 0, 00:32:28.407 "r_mbytes_per_sec": 0, 00:32:28.407 "w_mbytes_per_sec": 0 00:32:28.407 }, 00:32:28.407 "claimed": false, 00:32:28.407 "zoned": false, 
00:32:28.407 "supported_io_types": { 00:32:28.407 "read": true, 00:32:28.407 "write": true, 00:32:28.407 "unmap": false, 00:32:28.407 "flush": false, 00:32:28.407 "reset": true, 00:32:28.407 "nvme_admin": false, 00:32:28.407 "nvme_io": false, 00:32:28.407 "nvme_io_md": false, 00:32:28.407 "write_zeroes": true, 00:32:28.407 "zcopy": false, 00:32:28.407 "get_zone_info": false, 00:32:28.407 "zone_management": false, 00:32:28.407 "zone_append": false, 00:32:28.407 "compare": false, 00:32:28.407 "compare_and_write": false, 00:32:28.407 "abort": false, 00:32:28.407 "seek_hole": false, 00:32:28.407 "seek_data": false, 00:32:28.407 "copy": false, 00:32:28.407 "nvme_iov_md": false 00:32:28.407 }, 00:32:28.407 "memory_domains": [ 00:32:28.407 { 00:32:28.407 "dma_device_id": "system", 00:32:28.407 "dma_device_type": 1 00:32:28.407 }, 00:32:28.407 { 00:32:28.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.407 "dma_device_type": 2 00:32:28.407 }, 00:32:28.407 { 00:32:28.407 "dma_device_id": "system", 00:32:28.407 "dma_device_type": 1 00:32:28.407 }, 00:32:28.407 { 00:32:28.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.407 "dma_device_type": 2 00:32:28.407 } 00:32:28.407 ], 00:32:28.407 "driver_specific": { 00:32:28.407 "raid": { 00:32:28.407 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:28.407 "strip_size_kb": 0, 00:32:28.407 "state": "online", 00:32:28.407 "raid_level": "raid1", 00:32:28.407 "superblock": true, 00:32:28.407 "num_base_bdevs": 2, 00:32:28.407 "num_base_bdevs_discovered": 2, 00:32:28.407 "num_base_bdevs_operational": 2, 00:32:28.407 "base_bdevs_list": [ 00:32:28.407 { 00:32:28.407 "name": "pt1", 00:32:28.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:28.407 "is_configured": true, 00:32:28.407 "data_offset": 256, 00:32:28.407 "data_size": 7936 00:32:28.407 }, 00:32:28.407 { 00:32:28.408 "name": "pt2", 00:32:28.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:28.408 "is_configured": true, 00:32:28.408 "data_offset": 256, 
00:32:28.408 "data_size": 7936 00:32:28.408 } 00:32:28.408 ] 00:32:28.408 } 00:32:28.408 } 00:32:28.408 }' 00:32:28.408 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:28.408 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:28.408 pt2' 00:32:28.408 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 [2024-11-20 07:29:52.831631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fec1e6d4-f1ab-4a6e-ac46-158478708872 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z fec1e6d4-f1ab-4a6e-ac46-158478708872 ']' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 [2024-11-20 07:29:52.883283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.667 [2024-11-20 07:29:52.883317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:28.667 [2024-11-20 07:29:52.883458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:28.667 [2024-11-20 07:29:52.883532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:28.667 [2024-11-20 07:29:52.883551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:28.667 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.668 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.927 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.927 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:28.927 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.927 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 07:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:28.928 07:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:32:28.928 07:29:53 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 [2024-11-20 07:29:53.031370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:28.928 [2024-11-20 07:29:53.033821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:28.928 [2024-11-20 07:29:53.033922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:28.928 [2024-11-20 07:29:53.034048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:28.928 [2024-11-20 07:29:53.034073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.928 [2024-11-20 07:29:53.034087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:28.928 request: 00:32:28.928 { 00:32:28.928 "name": 
"raid_bdev1", 00:32:28.928 "raid_level": "raid1", 00:32:28.928 "base_bdevs": [ 00:32:28.928 "malloc1", 00:32:28.928 "malloc2" 00:32:28.928 ], 00:32:28.928 "superblock": false, 00:32:28.928 "method": "bdev_raid_create", 00:32:28.928 "req_id": 1 00:32:28.928 } 00:32:28.928 Got JSON-RPC error response 00:32:28.928 response: 00:32:28.928 { 00:32:28.928 "code": -17, 00:32:28.928 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:28.928 } 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 [2024-11-20 07:29:53.095298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:28.928 [2024-11-20 07:29:53.095376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:28.928 [2024-11-20 07:29:53.095415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:28.928 [2024-11-20 07:29:53.095430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:28.928 [2024-11-20 07:29:53.097921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:28.928 [2024-11-20 07:29:53.097999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:28.928 [2024-11-20 07:29:53.098051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:28.928 [2024-11-20 07:29:53.098114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:28.928 pt1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.928 "name": "raid_bdev1", 00:32:28.928 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:28.928 "strip_size_kb": 0, 00:32:28.928 "state": "configuring", 00:32:28.928 "raid_level": "raid1", 00:32:28.928 "superblock": true, 00:32:28.928 "num_base_bdevs": 2, 00:32:28.928 "num_base_bdevs_discovered": 1, 00:32:28.928 "num_base_bdevs_operational": 2, 00:32:28.928 "base_bdevs_list": [ 00:32:28.928 { 00:32:28.928 "name": "pt1", 00:32:28.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:28.928 "is_configured": true, 00:32:28.928 "data_offset": 256, 00:32:28.928 "data_size": 7936 00:32:28.928 }, 00:32:28.928 { 00:32:28.928 "name": null, 00:32:28.928 
"uuid": "00000000-0000-0000-0000-000000000002", 00:32:28.928 "is_configured": false, 00:32:28.928 "data_offset": 256, 00:32:28.928 "data_size": 7936 00:32:28.928 } 00:32:28.928 ] 00:32:28.928 }' 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.928 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.496 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.497 [2024-11-20 07:29:53.623463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:29.497 [2024-11-20 07:29:53.623721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.497 [2024-11-20 07:29:53.623760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:29.497 [2024-11-20 07:29:53.623778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.497 [2024-11-20 07:29:53.624043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.497 [2024-11-20 07:29:53.624102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:29.497 [2024-11-20 07:29:53.624155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:32:29.497 [2024-11-20 07:29:53.624195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:29.497 [2024-11-20 07:29:53.624318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:29.497 [2024-11-20 07:29:53.624337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:29.497 [2024-11-20 07:29:53.624411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:29.497 [2024-11-20 07:29:53.624545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:29.497 [2024-11-20 07:29:53.624559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:29.497 [2024-11-20 07:29:53.624756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.497 pt2 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:29.497 "name": "raid_bdev1", 00:32:29.497 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:29.497 "strip_size_kb": 0, 00:32:29.497 "state": "online", 00:32:29.497 "raid_level": "raid1", 00:32:29.497 "superblock": true, 00:32:29.497 "num_base_bdevs": 2, 00:32:29.497 "num_base_bdevs_discovered": 2, 00:32:29.497 "num_base_bdevs_operational": 2, 00:32:29.497 "base_bdevs_list": [ 00:32:29.497 { 00:32:29.497 "name": "pt1", 00:32:29.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:29.497 "is_configured": true, 00:32:29.497 "data_offset": 256, 00:32:29.497 "data_size": 7936 00:32:29.497 }, 00:32:29.497 { 00:32:29.497 "name": "pt2", 00:32:29.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.497 "is_configured": true, 00:32:29.497 "data_offset": 256, 
00:32:29.497 "data_size": 7936 00:32:29.497 } 00:32:29.497 ] 00:32:29.497 }' 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:29.497 07:29:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.065 [2024-11-20 07:29:54.152183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.065 "name": "raid_bdev1", 00:32:30.065 "aliases": [ 00:32:30.065 "fec1e6d4-f1ab-4a6e-ac46-158478708872" 00:32:30.065 ], 00:32:30.065 "product_name": 
"Raid Volume", 00:32:30.065 "block_size": 4096, 00:32:30.065 "num_blocks": 7936, 00:32:30.065 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:30.065 "md_size": 32, 00:32:30.065 "md_interleave": false, 00:32:30.065 "dif_type": 0, 00:32:30.065 "assigned_rate_limits": { 00:32:30.065 "rw_ios_per_sec": 0, 00:32:30.065 "rw_mbytes_per_sec": 0, 00:32:30.065 "r_mbytes_per_sec": 0, 00:32:30.065 "w_mbytes_per_sec": 0 00:32:30.065 }, 00:32:30.065 "claimed": false, 00:32:30.065 "zoned": false, 00:32:30.065 "supported_io_types": { 00:32:30.065 "read": true, 00:32:30.065 "write": true, 00:32:30.065 "unmap": false, 00:32:30.065 "flush": false, 00:32:30.065 "reset": true, 00:32:30.065 "nvme_admin": false, 00:32:30.065 "nvme_io": false, 00:32:30.065 "nvme_io_md": false, 00:32:30.065 "write_zeroes": true, 00:32:30.065 "zcopy": false, 00:32:30.065 "get_zone_info": false, 00:32:30.065 "zone_management": false, 00:32:30.065 "zone_append": false, 00:32:30.065 "compare": false, 00:32:30.065 "compare_and_write": false, 00:32:30.065 "abort": false, 00:32:30.065 "seek_hole": false, 00:32:30.065 "seek_data": false, 00:32:30.065 "copy": false, 00:32:30.065 "nvme_iov_md": false 00:32:30.065 }, 00:32:30.065 "memory_domains": [ 00:32:30.065 { 00:32:30.065 "dma_device_id": "system", 00:32:30.065 "dma_device_type": 1 00:32:30.065 }, 00:32:30.065 { 00:32:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.065 "dma_device_type": 2 00:32:30.065 }, 00:32:30.065 { 00:32:30.065 "dma_device_id": "system", 00:32:30.065 "dma_device_type": 1 00:32:30.065 }, 00:32:30.065 { 00:32:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.065 "dma_device_type": 2 00:32:30.065 } 00:32:30.065 ], 00:32:30.065 "driver_specific": { 00:32:30.065 "raid": { 00:32:30.065 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:30.065 "strip_size_kb": 0, 00:32:30.065 "state": "online", 00:32:30.065 "raid_level": "raid1", 00:32:30.065 "superblock": true, 00:32:30.065 "num_base_bdevs": 2, 00:32:30.065 
"num_base_bdevs_discovered": 2, 00:32:30.065 "num_base_bdevs_operational": 2, 00:32:30.065 "base_bdevs_list": [ 00:32:30.065 { 00:32:30.065 "name": "pt1", 00:32:30.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:30.065 "is_configured": true, 00:32:30.065 "data_offset": 256, 00:32:30.065 "data_size": 7936 00:32:30.065 }, 00:32:30.065 { 00:32:30.065 "name": "pt2", 00:32:30.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.065 "is_configured": true, 00:32:30.065 "data_offset": 256, 00:32:30.065 "data_size": 7936 00:32:30.065 } 00:32:30.065 ] 00:32:30.065 } 00:32:30.065 } 00:32:30.065 }' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:30.065 pt2' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.065 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.325 
07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.325 [2024-11-20 07:29:54.420226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' fec1e6d4-f1ab-4a6e-ac46-158478708872 '!=' fec1e6d4-f1ab-4a6e-ac46-158478708872 ']' 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.325 [2024-11-20 07:29:54.471944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.325 07:29:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.325 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.326 "name": "raid_bdev1", 00:32:30.326 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:30.326 "strip_size_kb": 0, 00:32:30.326 "state": "online", 00:32:30.326 "raid_level": "raid1", 00:32:30.326 "superblock": true, 00:32:30.326 "num_base_bdevs": 2, 00:32:30.326 "num_base_bdevs_discovered": 1, 00:32:30.326 "num_base_bdevs_operational": 1, 00:32:30.326 "base_bdevs_list": [ 00:32:30.326 { 00:32:30.326 "name": null, 00:32:30.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.326 "is_configured": false, 00:32:30.326 "data_offset": 0, 00:32:30.326 "data_size": 7936 00:32:30.326 }, 00:32:30.326 { 00:32:30.326 "name": "pt2", 00:32:30.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.326 "is_configured": true, 00:32:30.326 "data_offset": 256, 00:32:30.326 "data_size": 7936 00:32:30.326 } 00:32:30.326 ] 00:32:30.326 }' 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:32:30.326 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.894 [2024-11-20 07:29:54.992067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:30.894 [2024-11-20 07:29:54.992097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:30.894 [2024-11-20 07:29:54.992197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:30.894 [2024-11-20 07:29:54.992257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:30.894 [2024-11-20 07:29:54.992274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:32:30.894 07:29:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:32:30.894 07:29:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.894 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.894 [2024-11-20 07:29:55.068052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:30.894 [2024-11-20 07:29:55.068134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.894 
[2024-11-20 07:29:55.068161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:30.895 [2024-11-20 07:29:55.068183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.895 [2024-11-20 07:29:55.071169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.895 [2024-11-20 07:29:55.071221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:30.895 [2024-11-20 07:29:55.071287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:30.895 [2024-11-20 07:29:55.071389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:30.895 [2024-11-20 07:29:55.071522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:30.895 [2024-11-20 07:29:55.071543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:30.895 [2024-11-20 07:29:55.071657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:30.895 [2024-11-20 07:29:55.071814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:30.895 [2024-11-20 07:29:55.071834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:30.895 [2024-11-20 07:29:55.072048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.895 pt2 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.895 "name": "raid_bdev1", 00:32:30.895 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:30.895 "strip_size_kb": 0, 00:32:30.895 "state": "online", 00:32:30.895 "raid_level": "raid1", 00:32:30.895 "superblock": true, 00:32:30.895 "num_base_bdevs": 2, 00:32:30.895 "num_base_bdevs_discovered": 1, 00:32:30.895 "num_base_bdevs_operational": 1, 00:32:30.895 "base_bdevs_list": [ 00:32:30.895 { 00:32:30.895 
"name": null, 00:32:30.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.895 "is_configured": false, 00:32:30.895 "data_offset": 256, 00:32:30.895 "data_size": 7936 00:32:30.895 }, 00:32:30.895 { 00:32:30.895 "name": "pt2", 00:32:30.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.895 "is_configured": true, 00:32:30.895 "data_offset": 256, 00:32:30.895 "data_size": 7936 00:32:30.895 } 00:32:30.895 ] 00:32:30.895 }' 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.895 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.463 [2024-11-20 07:29:55.592253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:31.463 [2024-11-20 07:29:55.592288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:31.463 [2024-11-20 07:29:55.592375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:31.463 [2024-11-20 07:29:55.592460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:31.463 [2024-11-20 07:29:55.592474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.463 07:29:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.463 [2024-11-20 07:29:55.660316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:31.463 [2024-11-20 07:29:55.660405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:31.463 [2024-11-20 07:29:55.660436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:31.463 [2024-11-20 07:29:55.660450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:31.463 [2024-11-20 07:29:55.663267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:31.463 [2024-11-20 07:29:55.663313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:31.463 [2024-11-20 07:29:55.663425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:32:31.463 [2024-11-20 07:29:55.663507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:31.463 [2024-11-20 07:29:55.663739] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:31.463 [2024-11-20 07:29:55.663774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:31.463 [2024-11-20 07:29:55.663803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:31.463 [2024-11-20 07:29:55.663884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:31.463 [2024-11-20 07:29:55.664017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:31.463 [2024-11-20 07:29:55.664032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:31.463 [2024-11-20 07:29:55.664134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:31.463 [2024-11-20 07:29:55.664294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:31.463 [2024-11-20 07:29:55.664312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:31.463 [2024-11-20 07:29:55.664501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:31.463 pt1 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:32:31.463 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.464 "name": "raid_bdev1", 00:32:31.464 "uuid": "fec1e6d4-f1ab-4a6e-ac46-158478708872", 00:32:31.464 "strip_size_kb": 0, 00:32:31.464 "state": "online", 00:32:31.464 "raid_level": "raid1", 00:32:31.464 "superblock": true, 00:32:31.464 "num_base_bdevs": 2, 00:32:31.464 "num_base_bdevs_discovered": 1, 00:32:31.464 
"num_base_bdevs_operational": 1, 00:32:31.464 "base_bdevs_list": [ 00:32:31.464 { 00:32:31.464 "name": null, 00:32:31.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.464 "is_configured": false, 00:32:31.464 "data_offset": 256, 00:32:31.464 "data_size": 7936 00:32:31.464 }, 00:32:31.464 { 00:32:31.464 "name": "pt2", 00:32:31.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:31.464 "is_configured": true, 00:32:31.464 "data_offset": 256, 00:32:31.464 "data_size": 7936 00:32:31.464 } 00:32:31.464 ] 00:32:31.464 }' 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.464 07:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:32.031 [2024-11-20 
07:29:56.256874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' fec1e6d4-f1ab-4a6e-ac46-158478708872 '!=' fec1e6d4-f1ab-4a6e-ac46-158478708872 ']' 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88067 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88067 ']' 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88067 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.031 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88067 00:32:32.291 killing process with pid 88067 00:32:32.291 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.291 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.291 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88067' 00:32:32.291 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88067 00:32:32.291 [2024-11-20 07:29:56.335664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:32.291 [2024-11-20 07:29:56.335774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.291 07:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88067 
00:32:32.291 [2024-11-20 07:29:56.335838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:32.291 [2024-11-20 07:29:56.335862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:32.291 [2024-11-20 07:29:56.507522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:33.228 ************************************ 00:32:33.228 07:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:32:33.228 00:32:33.228 real 0m6.586s 00:32:33.228 user 0m10.504s 00:32:33.228 sys 0m1.020s 00:32:33.228 07:29:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.228 07:29:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.228 END TEST raid_superblock_test_md_separate 00:32:33.228 ************************************ 00:32:33.228 07:29:57 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:32:33.228 07:29:57 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:32:33.228 07:29:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:33.228 07:29:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.228 07:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:33.228 ************************************ 00:32:33.228 START TEST raid_rebuild_test_sb_md_separate 00:32:33.228 ************************************ 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:33.228 
07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88391 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88391 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88391 ']' 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.228 07:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.487 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:32:33.487 Zero copy mechanism will not be used. 00:32:33.487 [2024-11-20 07:29:57.572180] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:33.487 [2024-11-20 07:29:57.572327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88391 ] 00:32:33.487 [2024-11-20 07:29:57.735707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.745 [2024-11-20 07:29:57.848744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.004 [2024-11-20 07:29:58.045127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.004 [2024-11-20 07:29:58.045212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 BaseBdev1_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:34.572 07:29:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 [2024-11-20 07:29:58.601589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:34.572 [2024-11-20 07:29:58.601681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.572 [2024-11-20 07:29:58.601708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:34.572 [2024-11-20 07:29:58.601724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.572 [2024-11-20 07:29:58.604252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.572 [2024-11-20 07:29:58.604484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:34.572 BaseBdev1 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 BaseBdev2_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 [2024-11-20 07:29:58.656227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:34.572 [2024-11-20 07:29:58.656306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.572 [2024-11-20 07:29:58.656330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:34.572 [2024-11-20 07:29:58.656347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.572 [2024-11-20 07:29:58.658833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.572 [2024-11-20 07:29:58.659068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:34.572 BaseBdev2 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 spare_malloc 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 spare_delay 00:32:34.572 07:29:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 [2024-11-20 07:29:58.733562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:34.572 [2024-11-20 07:29:58.733743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.572 [2024-11-20 07:29:58.733780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:34.572 [2024-11-20 07:29:58.733815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.572 [2024-11-20 07:29:58.736757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.572 [2024-11-20 07:29:58.736807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:34.572 spare 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.572 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.572 [2024-11-20 07:29:58.745815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:34.572 [2024-11-20 07:29:58.748811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:32:34.572 [2024-11-20 07:29:58.749275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:34.572 [2024-11-20 07:29:58.749408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:34.572 [2024-11-20 07:29:58.749557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:34.572 [2024-11-20 07:29:58.749821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:34.572 [2024-11-20 07:29:58.749840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:34.572 [2024-11-20 07:29:58.750090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.573 "name": "raid_bdev1", 00:32:34.573 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:34.573 "strip_size_kb": 0, 00:32:34.573 "state": "online", 00:32:34.573 "raid_level": "raid1", 00:32:34.573 "superblock": true, 00:32:34.573 "num_base_bdevs": 2, 00:32:34.573 "num_base_bdevs_discovered": 2, 00:32:34.573 "num_base_bdevs_operational": 2, 00:32:34.573 "base_bdevs_list": [ 00:32:34.573 { 00:32:34.573 "name": "BaseBdev1", 00:32:34.573 "uuid": "d104149d-69de-559c-87a4-5758f14eae57", 00:32:34.573 "is_configured": true, 00:32:34.573 "data_offset": 256, 00:32:34.573 "data_size": 7936 00:32:34.573 }, 00:32:34.573 { 00:32:34.573 "name": "BaseBdev2", 00:32:34.573 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:34.573 "is_configured": true, 00:32:34.573 "data_offset": 256, 00:32:34.573 "data_size": 7936 00:32:34.573 } 00:32:34.573 ] 00:32:34.573 }' 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.573 07:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:35.141 07:29:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:35.141 [2024-11-20 07:29:59.310568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.141 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:35.142 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:35.420 [2024-11-20 07:29:59.642415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:35.420 /dev/nbd0 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:35.420 
07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:35.420 1+0 records in 00:32:35.420 1+0 records out 00:32:35.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480347 s, 8.5 MB/s 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:35.420 07:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:36.358 7936+0 records in 00:32:36.358 7936+0 records out 00:32:36.358 32505856 bytes (33 MB, 31 MiB) copied, 0.817454 s, 39.8 MB/s 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:36.358 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:36.618 [2024-11-20 07:30:00.808691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 [2024-11-20 07:30:00.820808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:36.618 "name": "raid_bdev1", 00:32:36.618 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:36.618 "strip_size_kb": 0, 00:32:36.618 "state": "online", 00:32:36.618 "raid_level": "raid1", 00:32:36.618 "superblock": true, 00:32:36.618 "num_base_bdevs": 2, 00:32:36.618 "num_base_bdevs_discovered": 1, 00:32:36.618 "num_base_bdevs_operational": 1, 00:32:36.618 "base_bdevs_list": [ 00:32:36.618 { 00:32:36.618 "name": null, 00:32:36.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.618 "is_configured": false, 00:32:36.618 "data_offset": 0, 00:32:36.618 "data_size": 7936 00:32:36.618 }, 00:32:36.618 { 00:32:36.618 "name": "BaseBdev2", 00:32:36.618 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:36.618 "is_configured": true, 00:32:36.618 "data_offset": 256, 00:32:36.618 "data_size": 7936 00:32:36.618 } 00:32:36.618 ] 00:32:36.618 }' 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:36.618 07:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.185 07:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:37.185 07:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:37.185 07:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.185 [2024-11-20 07:30:01.341003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:37.185 [2024-11-20 07:30:01.353310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:37.185 07:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.185 07:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:37.185 [2024-11-20 07:30:01.355938] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:38.120 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.380 07:30:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:38.380 "name": "raid_bdev1", 00:32:38.380 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:38.380 "strip_size_kb": 0, 00:32:38.380 "state": "online", 00:32:38.380 "raid_level": "raid1", 00:32:38.380 "superblock": true, 00:32:38.380 "num_base_bdevs": 2, 00:32:38.380 "num_base_bdevs_discovered": 2, 00:32:38.380 "num_base_bdevs_operational": 2, 00:32:38.380 "process": { 00:32:38.380 "type": "rebuild", 00:32:38.380 "target": "spare", 00:32:38.380 "progress": { 00:32:38.380 "blocks": 2560, 00:32:38.380 "percent": 32 00:32:38.380 } 00:32:38.380 }, 00:32:38.380 "base_bdevs_list": [ 00:32:38.380 { 00:32:38.380 "name": "spare", 00:32:38.380 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:38.380 "is_configured": true, 00:32:38.380 "data_offset": 256, 00:32:38.380 "data_size": 7936 00:32:38.380 }, 00:32:38.380 { 00:32:38.380 "name": "BaseBdev2", 00:32:38.380 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:38.380 "is_configured": true, 00:32:38.380 "data_offset": 256, 00:32:38.380 "data_size": 7936 00:32:38.380 } 00:32:38.380 ] 00:32:38.380 }' 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:32:38.380 [2024-11-20 07:30:02.531432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:38.380 [2024-11-20 07:30:02.565963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:38.380 [2024-11-20 07:30:02.566104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.380 [2024-11-20 07:30:02.566128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:38.380 [2024-11-20 07:30:02.566143] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:38.380 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.381 "name": "raid_bdev1", 00:32:38.381 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:38.381 "strip_size_kb": 0, 00:32:38.381 "state": "online", 00:32:38.381 "raid_level": "raid1", 00:32:38.381 "superblock": true, 00:32:38.381 "num_base_bdevs": 2, 00:32:38.381 "num_base_bdevs_discovered": 1, 00:32:38.381 "num_base_bdevs_operational": 1, 00:32:38.381 "base_bdevs_list": [ 00:32:38.381 { 00:32:38.381 "name": null, 00:32:38.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.381 "is_configured": false, 00:32:38.381 "data_offset": 0, 00:32:38.381 "data_size": 7936 00:32:38.381 }, 00:32:38.381 { 00:32:38.381 "name": "BaseBdev2", 00:32:38.381 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:38.381 "is_configured": true, 00:32:38.381 "data_offset": 256, 00:32:38.381 "data_size": 7936 00:32:38.381 } 00:32:38.381 ] 00:32:38.381 }' 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.381 07:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:38.949 07:30:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:38.949 "name": "raid_bdev1", 00:32:38.949 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:38.949 "strip_size_kb": 0, 00:32:38.949 "state": "online", 00:32:38.949 "raid_level": "raid1", 00:32:38.949 "superblock": true, 00:32:38.949 "num_base_bdevs": 2, 00:32:38.949 "num_base_bdevs_discovered": 1, 00:32:38.949 "num_base_bdevs_operational": 1, 00:32:38.949 "base_bdevs_list": [ 00:32:38.949 { 00:32:38.949 "name": null, 00:32:38.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.949 "is_configured": false, 00:32:38.949 "data_offset": 0, 00:32:38.949 "data_size": 7936 00:32:38.949 }, 00:32:38.949 { 00:32:38.949 "name": "BaseBdev2", 00:32:38.949 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:38.949 "is_configured": true, 00:32:38.949 "data_offset": 256, 00:32:38.949 "data_size": 7936 
00:32:38.949 } 00:32:38.949 ] 00:32:38.949 }' 00:32:38.949 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.209 [2024-11-20 07:30:03.300462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:39.209 [2024-11-20 07:30:03.315476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.209 07:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:39.209 [2024-11-20 07:30:03.318264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:40.146 "name": "raid_bdev1", 00:32:40.146 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:40.146 "strip_size_kb": 0, 00:32:40.146 "state": "online", 00:32:40.146 "raid_level": "raid1", 00:32:40.146 "superblock": true, 00:32:40.146 "num_base_bdevs": 2, 00:32:40.146 "num_base_bdevs_discovered": 2, 00:32:40.146 "num_base_bdevs_operational": 2, 00:32:40.146 "process": { 00:32:40.146 "type": "rebuild", 00:32:40.146 "target": "spare", 00:32:40.146 "progress": { 00:32:40.146 "blocks": 2560, 00:32:40.146 "percent": 32 00:32:40.146 } 00:32:40.146 }, 00:32:40.146 "base_bdevs_list": [ 00:32:40.146 { 00:32:40.146 "name": "spare", 00:32:40.146 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:40.146 "is_configured": true, 00:32:40.146 "data_offset": 256, 00:32:40.146 "data_size": 7936 00:32:40.146 }, 00:32:40.146 { 00:32:40.146 "name": "BaseBdev2", 00:32:40.146 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:40.146 "is_configured": true, 00:32:40.146 "data_offset": 256, 00:32:40.146 "data_size": 7936 00:32:40.146 } 00:32:40.146 ] 00:32:40.146 }' 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:40.146 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:40.405 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=768 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.405 
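[Editor's note] The `bdev_raid.sh: line 666: [: =: unary operator expected` message captured above is the classic empty-unquoted-variable failure of the `[` builtin: the tested variable expanded to nothing, so `[` was left with only `=` and `false` and could not parse the expression. A minimal standalone reproduction (a sketch of the failure mode, not the SPDK script itself):

```shell
#!/usr/bin/env bash
flag=""

# Unquoted: expands to '[ = false ]', which fails with
# "unary operator expected" (stderr suppressed here) and returns nonzero,
# exactly like the logged line-666 error.
if [ $flag = false ] 2>/dev/null; then
    echo "never reached"
else
    echo "errored or false"
fi

# Quoted (or using [[ ... ]]): the empty string remains a real operand,
# so the comparison is well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag empty, comparison well-formed"
fi
```

Because the logged test still proceeds (the `[` error only makes the condition falsy), the run continues past it, which is why the rebuild verification below succeeds anyway.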
07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:40.405 "name": "raid_bdev1", 00:32:40.405 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:40.405 "strip_size_kb": 0, 00:32:40.405 "state": "online", 00:32:40.405 "raid_level": "raid1", 00:32:40.405 "superblock": true, 00:32:40.405 "num_base_bdevs": 2, 00:32:40.405 "num_base_bdevs_discovered": 2, 00:32:40.405 "num_base_bdevs_operational": 2, 00:32:40.405 "process": { 00:32:40.405 "type": "rebuild", 00:32:40.405 "target": "spare", 00:32:40.405 "progress": { 00:32:40.405 "blocks": 2816, 00:32:40.405 "percent": 35 00:32:40.405 } 00:32:40.405 }, 00:32:40.405 "base_bdevs_list": [ 00:32:40.405 { 00:32:40.405 "name": "spare", 00:32:40.405 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:40.405 "is_configured": true, 00:32:40.405 "data_offset": 256, 00:32:40.405 "data_size": 7936 00:32:40.405 }, 00:32:40.405 { 00:32:40.405 "name": "BaseBdev2", 00:32:40.405 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:40.405 "is_configured": true, 00:32:40.405 "data_offset": 256, 00:32:40.405 "data_size": 7936 00:32:40.405 } 00:32:40.405 ] 00:32:40.405 }' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:40.405 07:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:41.812 "name": "raid_bdev1", 00:32:41.812 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:41.812 "strip_size_kb": 0, 00:32:41.812 
"state": "online", 00:32:41.812 "raid_level": "raid1", 00:32:41.812 "superblock": true, 00:32:41.812 "num_base_bdevs": 2, 00:32:41.812 "num_base_bdevs_discovered": 2, 00:32:41.812 "num_base_bdevs_operational": 2, 00:32:41.812 "process": { 00:32:41.812 "type": "rebuild", 00:32:41.812 "target": "spare", 00:32:41.812 "progress": { 00:32:41.812 "blocks": 5888, 00:32:41.812 "percent": 74 00:32:41.812 } 00:32:41.812 }, 00:32:41.812 "base_bdevs_list": [ 00:32:41.812 { 00:32:41.812 "name": "spare", 00:32:41.812 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:41.812 "is_configured": true, 00:32:41.812 "data_offset": 256, 00:32:41.812 "data_size": 7936 00:32:41.812 }, 00:32:41.812 { 00:32:41.812 "name": "BaseBdev2", 00:32:41.812 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:41.812 "is_configured": true, 00:32:41.812 "data_offset": 256, 00:32:41.812 "data_size": 7936 00:32:41.812 } 00:32:41.812 ] 00:32:41.812 }' 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:41.812 07:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:42.379 [2024-11-20 07:30:06.441780] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:42.379 [2024-11-20 07:30:06.441874] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:42.379 [2024-11-20 07:30:06.442027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.638 "name": "raid_bdev1", 00:32:42.638 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:42.638 "strip_size_kb": 0, 00:32:42.638 "state": "online", 00:32:42.638 "raid_level": "raid1", 00:32:42.638 "superblock": true, 00:32:42.638 "num_base_bdevs": 2, 00:32:42.638 "num_base_bdevs_discovered": 2, 00:32:42.638 "num_base_bdevs_operational": 2, 00:32:42.638 "base_bdevs_list": [ 00:32:42.638 { 00:32:42.638 "name": "spare", 00:32:42.638 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:42.638 "is_configured": true, 00:32:42.638 "data_offset": 256, 00:32:42.638 "data_size": 7936 
00:32:42.638 }, 00:32:42.638 { 00:32:42.638 "name": "BaseBdev2", 00:32:42.638 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:42.638 "is_configured": true, 00:32:42.638 "data_offset": 256, 00:32:42.638 "data_size": 7936 00:32:42.638 } 00:32:42.638 ] 00:32:42.638 }' 00:32:42.638 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.897 07:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.897 
07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.897 "name": "raid_bdev1", 00:32:42.897 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:42.897 "strip_size_kb": 0, 00:32:42.897 "state": "online", 00:32:42.897 "raid_level": "raid1", 00:32:42.897 "superblock": true, 00:32:42.897 "num_base_bdevs": 2, 00:32:42.897 "num_base_bdevs_discovered": 2, 00:32:42.897 "num_base_bdevs_operational": 2, 00:32:42.897 "base_bdevs_list": [ 00:32:42.897 { 00:32:42.897 "name": "spare", 00:32:42.897 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:42.897 "is_configured": true, 00:32:42.897 "data_offset": 256, 00:32:42.897 "data_size": 7936 00:32:42.897 }, 00:32:42.897 { 00:32:42.897 "name": "BaseBdev2", 00:32:42.897 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:42.897 "is_configured": true, 00:32:42.897 "data_offset": 256, 00:32:42.897 "data_size": 7936 00:32:42.897 } 00:32:42.897 ] 00:32:42.897 }' 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:42.897 07:30:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.897 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.156 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:43.156 "name": "raid_bdev1", 00:32:43.156 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:43.156 "strip_size_kb": 0, 00:32:43.156 "state": "online", 00:32:43.156 "raid_level": "raid1", 00:32:43.156 "superblock": true, 00:32:43.156 "num_base_bdevs": 2, 00:32:43.156 "num_base_bdevs_discovered": 2, 00:32:43.156 "num_base_bdevs_operational": 2, 00:32:43.156 "base_bdevs_list": [ 00:32:43.156 { 00:32:43.156 "name": "spare", 00:32:43.156 "uuid": 
"3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:43.156 "is_configured": true, 00:32:43.156 "data_offset": 256, 00:32:43.156 "data_size": 7936 00:32:43.156 }, 00:32:43.156 { 00:32:43.156 "name": "BaseBdev2", 00:32:43.156 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:43.156 "is_configured": true, 00:32:43.156 "data_offset": 256, 00:32:43.156 "data_size": 7936 00:32:43.156 } 00:32:43.156 ] 00:32:43.156 }' 00:32:43.156 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:43.156 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.414 [2024-11-20 07:30:07.675575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:43.414 [2024-11-20 07:30:07.675774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:43.414 [2024-11-20 07:30:07.676032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:43.414 [2024-11-20 07:30:07.676248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:43.414 [2024-11-20 07:30:07.676275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.414 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:43.672 07:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:43.931 
/dev/nbd0 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:43.931 1+0 records in 00:32:43.931 1+0 records out 00:32:43.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459493 s, 8.9 MB/s 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:43.931 07:30:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:43.931 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:44.206 /dev/nbd1 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:32:44.206 1+0 records in 00:32:44.206 1+0 records out 00:32:44.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490144 s, 8.4 MB/s 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:44.206 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:44.207 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:44.207 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:44.207 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:44.207 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:44.479 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:44.738 07:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:44.998 
07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:44.998 [2024-11-20 07:30:09.159223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:44.998 [2024-11-20 07:30:09.159286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:44.998 [2024-11-20 07:30:09.159318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:44.998 [2024-11-20 07:30:09.159333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:44.998 [2024-11-20 07:30:09.162117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:44.998 [2024-11-20 07:30:09.162158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:44.998 [2024-11-20 07:30:09.162254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:32:44.998 [2024-11-20 07:30:09.162325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:44.998 [2024-11-20 07:30:09.162510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:44.998 spare 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:44.998 [2024-11-20 07:30:09.262643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:44.998 [2024-11-20 07:30:09.262675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:44.998 [2024-11-20 07:30:09.262787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:32:44.998 [2024-11-20 07:30:09.263017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:44.998 [2024-11-20 07:30:09.263192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:44.998 [2024-11-20 07:30:09.263395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:44.998 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.999 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.258 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.258 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.258 "name": "raid_bdev1", 00:32:45.258 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:45.258 "strip_size_kb": 0, 00:32:45.258 "state": "online", 00:32:45.258 "raid_level": "raid1", 00:32:45.258 "superblock": true, 00:32:45.258 "num_base_bdevs": 2, 00:32:45.258 "num_base_bdevs_discovered": 2, 00:32:45.258 "num_base_bdevs_operational": 2, 00:32:45.258 "base_bdevs_list": [ 
00:32:45.258 { 00:32:45.258 "name": "spare", 00:32:45.258 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:45.258 "is_configured": true, 00:32:45.258 "data_offset": 256, 00:32:45.258 "data_size": 7936 00:32:45.258 }, 00:32:45.258 { 00:32:45.258 "name": "BaseBdev2", 00:32:45.258 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:45.258 "is_configured": true, 00:32:45.258 "data_offset": 256, 00:32:45.258 "data_size": 7936 00:32:45.258 } 00:32:45.258 ] 00:32:45.258 }' 00:32:45.258 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.258 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.516 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:45.776 "name": "raid_bdev1", 00:32:45.776 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:45.776 "strip_size_kb": 0, 00:32:45.776 "state": "online", 00:32:45.776 "raid_level": "raid1", 00:32:45.776 "superblock": true, 00:32:45.776 "num_base_bdevs": 2, 00:32:45.776 "num_base_bdevs_discovered": 2, 00:32:45.776 "num_base_bdevs_operational": 2, 00:32:45.776 "base_bdevs_list": [ 00:32:45.776 { 00:32:45.776 "name": "spare", 00:32:45.776 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:45.776 "is_configured": true, 00:32:45.776 "data_offset": 256, 00:32:45.776 "data_size": 7936 00:32:45.776 }, 00:32:45.776 { 00:32:45.776 "name": "BaseBdev2", 00:32:45.776 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:45.776 "is_configured": true, 00:32:45.776 "data_offset": 256, 00:32:45.776 "data_size": 7936 00:32:45.776 } 00:32:45.776 ] 00:32:45.776 }' 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.776 [2024-11-20 07:30:09.991791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.776 07:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.776 "name": "raid_bdev1", 00:32:45.776 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:45.776 "strip_size_kb": 0, 00:32:45.776 "state": "online", 00:32:45.776 "raid_level": "raid1", 00:32:45.776 "superblock": true, 00:32:45.776 "num_base_bdevs": 2, 00:32:45.776 "num_base_bdevs_discovered": 1, 00:32:45.776 "num_base_bdevs_operational": 1, 00:32:45.776 "base_bdevs_list": [ 00:32:45.776 { 00:32:45.776 "name": null, 00:32:45.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.776 "is_configured": false, 00:32:45.776 "data_offset": 0, 00:32:45.776 "data_size": 7936 00:32:45.776 }, 00:32:45.776 { 00:32:45.776 "name": "BaseBdev2", 00:32:45.776 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:45.776 "is_configured": true, 00:32:45.776 "data_offset": 256, 00:32:45.776 "data_size": 7936 00:32:45.776 } 00:32:45.776 ] 00:32:45.776 }' 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.776 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.345 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:46.345 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:46.345 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.345 [2024-11-20 07:30:10.523963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:46.345 [2024-11-20 07:30:10.524194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:46.345 [2024-11-20 07:30:10.524217] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:46.345 [2024-11-20 07:30:10.524283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:46.345 [2024-11-20 07:30:10.536073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:32:46.345 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.345 07:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:46.345 [2024-11-20 07:30:10.538575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.280 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:47.540 "name": "raid_bdev1", 00:32:47.540 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:47.540 "strip_size_kb": 0, 00:32:47.540 "state": "online", 00:32:47.540 "raid_level": "raid1", 00:32:47.540 "superblock": true, 00:32:47.540 "num_base_bdevs": 2, 00:32:47.540 "num_base_bdevs_discovered": 2, 00:32:47.540 "num_base_bdevs_operational": 2, 00:32:47.540 "process": { 00:32:47.540 "type": "rebuild", 00:32:47.540 "target": "spare", 00:32:47.540 "progress": { 00:32:47.540 "blocks": 2560, 00:32:47.540 "percent": 32 00:32:47.540 } 00:32:47.540 }, 00:32:47.540 "base_bdevs_list": [ 00:32:47.540 { 00:32:47.540 "name": "spare", 00:32:47.540 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:47.540 "is_configured": true, 00:32:47.540 "data_offset": 256, 00:32:47.540 "data_size": 7936 00:32:47.540 }, 00:32:47.540 { 00:32:47.540 "name": "BaseBdev2", 00:32:47.540 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:47.540 "is_configured": true, 00:32:47.540 "data_offset": 256, 00:32:47.540 "data_size": 7936 00:32:47.540 } 00:32:47.540 ] 00:32:47.540 }' 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:47.540 07:30:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.540 [2024-11-20 07:30:11.712383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:47.540 [2024-11-20 07:30:11.747717] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:47.540 [2024-11-20 07:30:11.747802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.540 [2024-11-20 07:30:11.747823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:47.540 [2024-11-20 07:30:11.747846] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:47.540 07:30:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.540 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.799 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.799 "name": "raid_bdev1", 00:32:47.799 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:47.799 "strip_size_kb": 0, 00:32:47.799 "state": "online", 00:32:47.799 "raid_level": "raid1", 00:32:47.799 "superblock": true, 00:32:47.799 "num_base_bdevs": 2, 00:32:47.799 "num_base_bdevs_discovered": 1, 00:32:47.799 "num_base_bdevs_operational": 1, 00:32:47.799 "base_bdevs_list": [ 00:32:47.799 { 00:32:47.799 "name": null, 00:32:47.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.799 "is_configured": false, 00:32:47.799 "data_offset": 0, 00:32:47.799 "data_size": 7936 00:32:47.799 }, 00:32:47.799 { 00:32:47.799 "name": "BaseBdev2", 00:32:47.799 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:47.799 "is_configured": true, 00:32:47.799 "data_offset": 256, 00:32:47.799 "data_size": 7936 00:32:47.799 } 
00:32:47.799 ] 00:32:47.799 }' 00:32:47.799 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.799 07:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.058 07:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:48.058 07:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.058 07:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.058 [2024-11-20 07:30:12.300968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:48.058 [2024-11-20 07:30:12.301058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.058 [2024-11-20 07:30:12.301087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:48.058 [2024-11-20 07:30:12.301103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.058 [2024-11-20 07:30:12.301387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.058 [2024-11-20 07:30:12.301422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:48.058 [2024-11-20 07:30:12.301495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:48.058 [2024-11-20 07:30:12.301547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:48.058 [2024-11-20 07:30:12.301559] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:48.058 [2024-11-20 07:30:12.301587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:48.058 [2024-11-20 07:30:12.313536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:32:48.058 spare 00:32:48.058 07:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.058 07:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:48.058 [2024-11-20 07:30:12.316033] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:49.437 "name": 
"raid_bdev1", 00:32:49.437 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:49.437 "strip_size_kb": 0, 00:32:49.437 "state": "online", 00:32:49.437 "raid_level": "raid1", 00:32:49.437 "superblock": true, 00:32:49.437 "num_base_bdevs": 2, 00:32:49.437 "num_base_bdevs_discovered": 2, 00:32:49.437 "num_base_bdevs_operational": 2, 00:32:49.437 "process": { 00:32:49.437 "type": "rebuild", 00:32:49.437 "target": "spare", 00:32:49.437 "progress": { 00:32:49.437 "blocks": 2560, 00:32:49.437 "percent": 32 00:32:49.437 } 00:32:49.437 }, 00:32:49.437 "base_bdevs_list": [ 00:32:49.437 { 00:32:49.437 "name": "spare", 00:32:49.437 "uuid": "3310326d-71b8-5cd8-bb96-6c120c0fb501", 00:32:49.437 "is_configured": true, 00:32:49.437 "data_offset": 256, 00:32:49.437 "data_size": 7936 00:32:49.437 }, 00:32:49.437 { 00:32:49.437 "name": "BaseBdev2", 00:32:49.437 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:49.437 "is_configured": true, 00:32:49.437 "data_offset": 256, 00:32:49.437 "data_size": 7936 00:32:49.437 } 00:32:49.437 ] 00:32:49.437 }' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.437 [2024-11-20 07:30:13.490245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:32:49.437 [2024-11-20 07:30:13.524787] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:49.437 [2024-11-20 07:30:13.524868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.437 [2024-11-20 07:30:13.524892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:49.437 [2024-11-20 07:30:13.524902] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.437 "name": "raid_bdev1", 00:32:49.437 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:49.437 "strip_size_kb": 0, 00:32:49.437 "state": "online", 00:32:49.437 "raid_level": "raid1", 00:32:49.437 "superblock": true, 00:32:49.437 "num_base_bdevs": 2, 00:32:49.437 "num_base_bdevs_discovered": 1, 00:32:49.437 "num_base_bdevs_operational": 1, 00:32:49.437 "base_bdevs_list": [ 00:32:49.437 { 00:32:49.437 "name": null, 00:32:49.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.437 "is_configured": false, 00:32:49.437 "data_offset": 0, 00:32:49.437 "data_size": 7936 00:32:49.437 }, 00:32:49.437 { 00:32:49.437 "name": "BaseBdev2", 00:32:49.437 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:49.437 "is_configured": true, 00:32:49.437 "data_offset": 256, 00:32:49.437 "data_size": 7936 00:32:49.437 } 00:32:49.437 ] 00:32:49.437 }' 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.437 07:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:50.006 07:30:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:50.006 "name": "raid_bdev1", 00:32:50.006 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:50.006 "strip_size_kb": 0, 00:32:50.006 "state": "online", 00:32:50.006 "raid_level": "raid1", 00:32:50.006 "superblock": true, 00:32:50.006 "num_base_bdevs": 2, 00:32:50.006 "num_base_bdevs_discovered": 1, 00:32:50.006 "num_base_bdevs_operational": 1, 00:32:50.006 "base_bdevs_list": [ 00:32:50.006 { 00:32:50.006 "name": null, 00:32:50.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.006 "is_configured": false, 00:32:50.006 "data_offset": 0, 00:32:50.006 "data_size": 7936 00:32:50.006 }, 00:32:50.006 { 00:32:50.006 "name": "BaseBdev2", 00:32:50.006 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:50.006 "is_configured": true, 00:32:50.006 "data_offset": 256, 00:32:50.006 "data_size": 7936 00:32:50.006 } 00:32:50.006 ] 00:32:50.006 }' 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.006 [2024-11-20 07:30:14.230260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:50.006 [2024-11-20 07:30:14.230323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.006 [2024-11-20 07:30:14.230357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:50.006 [2024-11-20 07:30:14.230370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.006 [2024-11-20 07:30:14.230694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.006 [2024-11-20 07:30:14.230716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:32:50.006 [2024-11-20 07:30:14.230785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:50.006 [2024-11-20 07:30:14.230805] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:50.006 [2024-11-20 07:30:14.230821] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:50.006 [2024-11-20 07:30:14.230834] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:50.006 BaseBdev1 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.006 07:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:51.383 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.384 "name": "raid_bdev1", 00:32:51.384 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:51.384 "strip_size_kb": 0, 00:32:51.384 "state": "online", 00:32:51.384 "raid_level": "raid1", 00:32:51.384 "superblock": true, 00:32:51.384 "num_base_bdevs": 2, 00:32:51.384 "num_base_bdevs_discovered": 1, 00:32:51.384 "num_base_bdevs_operational": 1, 00:32:51.384 "base_bdevs_list": [ 00:32:51.384 { 00:32:51.384 "name": null, 00:32:51.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.384 "is_configured": false, 00:32:51.384 "data_offset": 0, 00:32:51.384 "data_size": 7936 00:32:51.384 }, 00:32:51.384 { 00:32:51.384 "name": "BaseBdev2", 00:32:51.384 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:51.384 "is_configured": true, 00:32:51.384 "data_offset": 256, 00:32:51.384 "data_size": 7936 00:32:51.384 } 00:32:51.384 ] 00:32:51.384 }' 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.384 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:51.643 "name": "raid_bdev1", 00:32:51.643 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:51.643 "strip_size_kb": 0, 00:32:51.643 "state": "online", 00:32:51.643 "raid_level": "raid1", 00:32:51.643 "superblock": true, 00:32:51.643 "num_base_bdevs": 2, 00:32:51.643 "num_base_bdevs_discovered": 1, 00:32:51.643 "num_base_bdevs_operational": 1, 00:32:51.643 "base_bdevs_list": [ 00:32:51.643 { 00:32:51.643 "name": null, 00:32:51.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.643 "is_configured": false, 00:32:51.643 "data_offset": 0, 00:32:51.643 "data_size": 7936 00:32:51.643 }, 00:32:51.643 { 00:32:51.643 "name": "BaseBdev2", 00:32:51.643 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:51.643 "is_configured": 
true, 00:32:51.643 "data_offset": 256, 00:32:51.643 "data_size": 7936 00:32:51.643 } 00:32:51.643 ] 00:32:51.643 }' 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:51.643 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.905 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:51.905 [2024-11-20 07:30:15.950886] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.905 [2024-11-20 07:30:15.951140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:51.906 [2024-11-20 07:30:15.951169] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:51.906 request: 00:32:51.906 { 00:32:51.906 "base_bdev": "BaseBdev1", 00:32:51.906 "raid_bdev": "raid_bdev1", 00:32:51.906 "method": "bdev_raid_add_base_bdev", 00:32:51.906 "req_id": 1 00:32:51.906 } 00:32:51.906 Got JSON-RPC error response 00:32:51.906 response: 00:32:51.906 { 00:32:51.906 "code": -22, 00:32:51.906 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:51.906 } 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.906 07:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.845 07:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.845 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.845 "name": "raid_bdev1", 00:32:52.845 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:52.845 "strip_size_kb": 0, 00:32:52.845 "state": "online", 00:32:52.845 "raid_level": "raid1", 00:32:52.845 "superblock": true, 00:32:52.845 "num_base_bdevs": 2, 00:32:52.845 "num_base_bdevs_discovered": 1, 00:32:52.845 "num_base_bdevs_operational": 1, 00:32:52.845 "base_bdevs_list": [ 00:32:52.845 { 00:32:52.845 "name": null, 00:32:52.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.845 "is_configured": false, 00:32:52.845 
"data_offset": 0, 00:32:52.845 "data_size": 7936 00:32:52.845 }, 00:32:52.845 { 00:32:52.845 "name": "BaseBdev2", 00:32:52.845 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:52.845 "is_configured": true, 00:32:52.845 "data_offset": 256, 00:32:52.845 "data_size": 7936 00:32:52.845 } 00:32:52.845 ] 00:32:52.845 }' 00:32:52.845 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.845 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:53.413 "name": "raid_bdev1", 00:32:53.413 "uuid": "d7552255-c303-40d0-8f38-11acface50d4", 00:32:53.413 
"strip_size_kb": 0, 00:32:53.413 "state": "online", 00:32:53.413 "raid_level": "raid1", 00:32:53.413 "superblock": true, 00:32:53.413 "num_base_bdevs": 2, 00:32:53.413 "num_base_bdevs_discovered": 1, 00:32:53.413 "num_base_bdevs_operational": 1, 00:32:53.413 "base_bdevs_list": [ 00:32:53.413 { 00:32:53.413 "name": null, 00:32:53.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.413 "is_configured": false, 00:32:53.413 "data_offset": 0, 00:32:53.413 "data_size": 7936 00:32:53.413 }, 00:32:53.413 { 00:32:53.413 "name": "BaseBdev2", 00:32:53.413 "uuid": "05c915da-fb0e-53ba-b41c-19ba34bac3cb", 00:32:53.413 "is_configured": true, 00:32:53.413 "data_offset": 256, 00:32:53.413 "data_size": 7936 00:32:53.413 } 00:32:53.413 ] 00:32:53.413 }' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88391 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88391 ']' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88391 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.413 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88391 00:32:53.413 07:30:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.413 killing process with pid 88391 00:32:53.413 Received shutdown signal, test time was about 60.000000 seconds 00:32:53.413 00:32:53.414 Latency(us) 00:32:53.414 [2024-11-20T07:30:17.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.414 [2024-11-20T07:30:17.703Z] =================================================================================================================== 00:32:53.414 [2024-11-20T07:30:17.703Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:53.414 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.414 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88391' 00:32:53.414 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88391 00:32:53.414 [2024-11-20 07:30:17.686775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.414 07:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88391 00:32:53.414 [2024-11-20 07:30:17.686923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.414 [2024-11-20 07:30:17.687005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.414 [2024-11-20 07:30:17.687022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:53.672 [2024-11-20 07:30:17.943464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:54.608 07:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:32:54.608 00:32:54.608 real 0m21.401s 00:32:54.608 user 0m29.197s 00:32:54.608 sys 0m2.530s 00:32:54.608 07:30:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.608 07:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.608 ************************************ 00:32:54.608 END TEST raid_rebuild_test_sb_md_separate 00:32:54.608 ************************************ 00:32:54.868 07:30:18 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:32:54.868 07:30:18 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:32:54.868 07:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:54.868 07:30:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.868 07:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:54.868 ************************************ 00:32:54.868 START TEST raid_state_function_test_sb_md_interleaved 00:32:54.868 ************************************ 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:54.868 07:30:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:54.868 Process raid pid: 89093 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89093 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89093' 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89093 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89093 ']' 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.868 07:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:54.868 [2024-11-20 07:30:19.057989] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:32:54.868 [2024-11-20 07:30:19.058377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.126 [2024-11-20 07:30:19.246107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.126 [2024-11-20 07:30:19.369826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.384 [2024-11-20 07:30:19.566832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.384 [2024-11-20 07:30:19.567061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.952 07:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.952 07:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:32:55.952 07:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:55.952 07:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.952 07:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:55.952 [2024-11-20 07:30:20.005074] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:55.952 [2024-11-20 07:30:20.005301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:55.952 [2024-11-20 07:30:20.005434] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:55.952 [2024-11-20 07:30:20.005496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:55.952 07:30:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:55.952 07:30:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.952 "name": "Existed_Raid", 00:32:55.952 "uuid": "a62b8e0c-7232-43be-9ea5-a4c9291206a2", 00:32:55.952 "strip_size_kb": 0, 00:32:55.952 "state": "configuring", 00:32:55.952 "raid_level": "raid1", 00:32:55.952 "superblock": true, 00:32:55.952 "num_base_bdevs": 2, 00:32:55.952 "num_base_bdevs_discovered": 0, 00:32:55.952 "num_base_bdevs_operational": 2, 00:32:55.952 "base_bdevs_list": [ 00:32:55.952 { 00:32:55.952 "name": "BaseBdev1", 00:32:55.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.952 "is_configured": false, 00:32:55.952 "data_offset": 0, 00:32:55.952 "data_size": 0 00:32:55.952 }, 00:32:55.952 { 00:32:55.952 "name": "BaseBdev2", 00:32:55.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.952 "is_configured": false, 00:32:55.952 "data_offset": 0, 00:32:55.952 "data_size": 0 00:32:55.952 } 00:32:55.952 ] 00:32:55.952 }' 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.952 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.521 [2024-11-20 07:30:20.529167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:56.521 [2024-11-20 07:30:20.529207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.521 [2024-11-20 07:30:20.541126] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:56.521 [2024-11-20 07:30:20.541378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:56.521 [2024-11-20 07:30:20.541513] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:56.521 [2024-11-20 07:30:20.541747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.521 [2024-11-20 07:30:20.591813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:56.521 BaseBdev1 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.521 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.522 [ 00:32:56.522 { 00:32:56.522 "name": "BaseBdev1", 00:32:56.522 "aliases": [ 00:32:56.522 "005b9d43-397c-4ee4-a678-3cb98269a270" 00:32:56.522 ], 00:32:56.522 "product_name": "Malloc disk", 00:32:56.522 "block_size": 4128, 00:32:56.522 "num_blocks": 8192, 00:32:56.522 "uuid": "005b9d43-397c-4ee4-a678-3cb98269a270", 00:32:56.522 "md_size": 32, 00:32:56.522 
"md_interleave": true, 00:32:56.522 "dif_type": 0, 00:32:56.522 "assigned_rate_limits": { 00:32:56.522 "rw_ios_per_sec": 0, 00:32:56.522 "rw_mbytes_per_sec": 0, 00:32:56.522 "r_mbytes_per_sec": 0, 00:32:56.522 "w_mbytes_per_sec": 0 00:32:56.522 }, 00:32:56.522 "claimed": true, 00:32:56.522 "claim_type": "exclusive_write", 00:32:56.522 "zoned": false, 00:32:56.522 "supported_io_types": { 00:32:56.522 "read": true, 00:32:56.522 "write": true, 00:32:56.522 "unmap": true, 00:32:56.522 "flush": true, 00:32:56.522 "reset": true, 00:32:56.522 "nvme_admin": false, 00:32:56.522 "nvme_io": false, 00:32:56.522 "nvme_io_md": false, 00:32:56.522 "write_zeroes": true, 00:32:56.522 "zcopy": true, 00:32:56.522 "get_zone_info": false, 00:32:56.522 "zone_management": false, 00:32:56.522 "zone_append": false, 00:32:56.522 "compare": false, 00:32:56.522 "compare_and_write": false, 00:32:56.522 "abort": true, 00:32:56.522 "seek_hole": false, 00:32:56.522 "seek_data": false, 00:32:56.522 "copy": true, 00:32:56.522 "nvme_iov_md": false 00:32:56.522 }, 00:32:56.522 "memory_domains": [ 00:32:56.522 { 00:32:56.522 "dma_device_id": "system", 00:32:56.522 "dma_device_type": 1 00:32:56.522 }, 00:32:56.522 { 00:32:56.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:56.522 "dma_device_type": 2 00:32:56.522 } 00:32:56.522 ], 00:32:56.522 "driver_specific": {} 00:32:56.522 } 00:32:56.522 ] 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:56.522 07:30:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:56.522 "name": "Existed_Raid", 00:32:56.522 "uuid": "4e71d5fa-3c12-462e-b3ad-07ad77083edf", 00:32:56.522 "strip_size_kb": 0, 00:32:56.522 "state": "configuring", 00:32:56.522 "raid_level": "raid1", 
00:32:56.522 "superblock": true, 00:32:56.522 "num_base_bdevs": 2, 00:32:56.522 "num_base_bdevs_discovered": 1, 00:32:56.522 "num_base_bdevs_operational": 2, 00:32:56.522 "base_bdevs_list": [ 00:32:56.522 { 00:32:56.522 "name": "BaseBdev1", 00:32:56.522 "uuid": "005b9d43-397c-4ee4-a678-3cb98269a270", 00:32:56.522 "is_configured": true, 00:32:56.522 "data_offset": 256, 00:32:56.522 "data_size": 7936 00:32:56.522 }, 00:32:56.522 { 00:32:56.522 "name": "BaseBdev2", 00:32:56.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.522 "is_configured": false, 00:32:56.522 "data_offset": 0, 00:32:56.522 "data_size": 0 00:32:56.522 } 00:32:56.522 ] 00:32:56.522 }' 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:56.522 07:30:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.091 [2024-11-20 07:30:21.144127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:57.091 [2024-11-20 07:30:21.144194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.091 [2024-11-20 07:30:21.152192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:57.091 [2024-11-20 07:30:21.154935] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:57.091 [2024-11-20 07:30:21.155215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:57.091 
07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.091 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.091 "name": "Existed_Raid", 00:32:57.091 "uuid": "b2bb162d-72c5-4225-a759-6aab1907f067", 00:32:57.091 "strip_size_kb": 0, 00:32:57.091 "state": "configuring", 00:32:57.091 "raid_level": "raid1", 00:32:57.091 "superblock": true, 00:32:57.091 "num_base_bdevs": 2, 00:32:57.091 "num_base_bdevs_discovered": 1, 00:32:57.091 "num_base_bdevs_operational": 2, 00:32:57.091 "base_bdevs_list": [ 00:32:57.091 { 00:32:57.091 "name": "BaseBdev1", 00:32:57.091 "uuid": "005b9d43-397c-4ee4-a678-3cb98269a270", 00:32:57.091 "is_configured": true, 00:32:57.091 "data_offset": 256, 00:32:57.091 "data_size": 7936 00:32:57.091 }, 00:32:57.091 { 00:32:57.091 "name": "BaseBdev2", 00:32:57.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.092 "is_configured": false, 00:32:57.092 "data_offset": 0, 00:32:57.092 "data_size": 0 00:32:57.092 } 00:32:57.092 ] 00:32:57.092 }' 00:32:57.092 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:32:57.092 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.660 [2024-11-20 07:30:21.714569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:57.660 [2024-11-20 07:30:21.715161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:57.660 BaseBdev2 00:32:57.660 [2024-11-20 07:30:21.715320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:57.660 [2024-11-20 07:30:21.715465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:57.660 [2024-11-20 07:30:21.715597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:57.660 [2024-11-20 07:30:21.715643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:57.660 [2024-11-20 07:30:21.715769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.660 [ 00:32:57.660 { 00:32:57.660 "name": "BaseBdev2", 00:32:57.660 "aliases": [ 00:32:57.660 "6cc54c5b-f1c8-4f39-8fc1-8742dcb550fa" 00:32:57.660 ], 00:32:57.660 "product_name": "Malloc disk", 00:32:57.660 "block_size": 4128, 00:32:57.660 "num_blocks": 8192, 00:32:57.660 "uuid": "6cc54c5b-f1c8-4f39-8fc1-8742dcb550fa", 00:32:57.660 "md_size": 32, 00:32:57.660 "md_interleave": true, 00:32:57.660 "dif_type": 0, 00:32:57.660 "assigned_rate_limits": { 00:32:57.660 "rw_ios_per_sec": 0, 00:32:57.660 "rw_mbytes_per_sec": 0, 00:32:57.660 "r_mbytes_per_sec": 0, 00:32:57.660 "w_mbytes_per_sec": 0 00:32:57.660 }, 00:32:57.660 "claimed": true, 00:32:57.660 "claim_type": "exclusive_write", 
00:32:57.660 "zoned": false, 00:32:57.660 "supported_io_types": { 00:32:57.660 "read": true, 00:32:57.660 "write": true, 00:32:57.660 "unmap": true, 00:32:57.660 "flush": true, 00:32:57.660 "reset": true, 00:32:57.660 "nvme_admin": false, 00:32:57.660 "nvme_io": false, 00:32:57.660 "nvme_io_md": false, 00:32:57.660 "write_zeroes": true, 00:32:57.660 "zcopy": true, 00:32:57.660 "get_zone_info": false, 00:32:57.660 "zone_management": false, 00:32:57.660 "zone_append": false, 00:32:57.660 "compare": false, 00:32:57.660 "compare_and_write": false, 00:32:57.660 "abort": true, 00:32:57.660 "seek_hole": false, 00:32:57.660 "seek_data": false, 00:32:57.660 "copy": true, 00:32:57.660 "nvme_iov_md": false 00:32:57.660 }, 00:32:57.660 "memory_domains": [ 00:32:57.660 { 00:32:57.660 "dma_device_id": "system", 00:32:57.660 "dma_device_type": 1 00:32:57.660 }, 00:32:57.660 { 00:32:57.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.660 "dma_device_type": 2 00:32:57.660 } 00:32:57.660 ], 00:32:57.660 "driver_specific": {} 00:32:57.660 } 00:32:57.660 ] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:57.660 
07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.660 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.660 "name": "Existed_Raid", 00:32:57.661 "uuid": "b2bb162d-72c5-4225-a759-6aab1907f067", 00:32:57.661 "strip_size_kb": 0, 00:32:57.661 "state": "online", 00:32:57.661 "raid_level": "raid1", 00:32:57.661 "superblock": true, 00:32:57.661 "num_base_bdevs": 2, 00:32:57.661 "num_base_bdevs_discovered": 2, 00:32:57.661 
"num_base_bdevs_operational": 2, 00:32:57.661 "base_bdevs_list": [ 00:32:57.661 { 00:32:57.661 "name": "BaseBdev1", 00:32:57.661 "uuid": "005b9d43-397c-4ee4-a678-3cb98269a270", 00:32:57.661 "is_configured": true, 00:32:57.661 "data_offset": 256, 00:32:57.661 "data_size": 7936 00:32:57.661 }, 00:32:57.661 { 00:32:57.661 "name": "BaseBdev2", 00:32:57.661 "uuid": "6cc54c5b-f1c8-4f39-8fc1-8742dcb550fa", 00:32:57.661 "is_configured": true, 00:32:57.661 "data_offset": 256, 00:32:57.661 "data_size": 7936 00:32:57.661 } 00:32:57.661 ] 00:32:57.661 }' 00:32:57.661 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.661 07:30:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.228 07:30:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.228 [2024-11-20 07:30:22.287298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.228 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.228 "name": "Existed_Raid", 00:32:58.228 "aliases": [ 00:32:58.228 "b2bb162d-72c5-4225-a759-6aab1907f067" 00:32:58.228 ], 00:32:58.228 "product_name": "Raid Volume", 00:32:58.228 "block_size": 4128, 00:32:58.228 "num_blocks": 7936, 00:32:58.228 "uuid": "b2bb162d-72c5-4225-a759-6aab1907f067", 00:32:58.228 "md_size": 32, 00:32:58.228 "md_interleave": true, 00:32:58.228 "dif_type": 0, 00:32:58.228 "assigned_rate_limits": { 00:32:58.228 "rw_ios_per_sec": 0, 00:32:58.228 "rw_mbytes_per_sec": 0, 00:32:58.228 "r_mbytes_per_sec": 0, 00:32:58.228 "w_mbytes_per_sec": 0 00:32:58.228 }, 00:32:58.228 "claimed": false, 00:32:58.228 "zoned": false, 00:32:58.228 "supported_io_types": { 00:32:58.228 "read": true, 00:32:58.228 "write": true, 00:32:58.228 "unmap": false, 00:32:58.228 "flush": false, 00:32:58.228 "reset": true, 00:32:58.228 "nvme_admin": false, 00:32:58.228 "nvme_io": false, 00:32:58.228 "nvme_io_md": false, 00:32:58.228 "write_zeroes": true, 00:32:58.228 "zcopy": false, 00:32:58.228 "get_zone_info": false, 00:32:58.228 "zone_management": false, 00:32:58.228 "zone_append": false, 00:32:58.229 "compare": false, 00:32:58.229 "compare_and_write": false, 00:32:58.229 "abort": false, 00:32:58.229 "seek_hole": false, 00:32:58.229 "seek_data": false, 00:32:58.229 "copy": false, 00:32:58.229 "nvme_iov_md": false 00:32:58.229 }, 00:32:58.229 "memory_domains": [ 00:32:58.229 { 00:32:58.229 "dma_device_id": "system", 00:32:58.229 "dma_device_type": 1 00:32:58.229 }, 00:32:58.229 { 00:32:58.229 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:58.229 "dma_device_type": 2 00:32:58.229 }, 00:32:58.229 { 00:32:58.229 "dma_device_id": "system", 00:32:58.229 "dma_device_type": 1 00:32:58.229 }, 00:32:58.229 { 00:32:58.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:58.229 "dma_device_type": 2 00:32:58.229 } 00:32:58.229 ], 00:32:58.229 "driver_specific": { 00:32:58.229 "raid": { 00:32:58.229 "uuid": "b2bb162d-72c5-4225-a759-6aab1907f067", 00:32:58.229 "strip_size_kb": 0, 00:32:58.229 "state": "online", 00:32:58.229 "raid_level": "raid1", 00:32:58.229 "superblock": true, 00:32:58.229 "num_base_bdevs": 2, 00:32:58.229 "num_base_bdevs_discovered": 2, 00:32:58.229 "num_base_bdevs_operational": 2, 00:32:58.229 "base_bdevs_list": [ 00:32:58.229 { 00:32:58.229 "name": "BaseBdev1", 00:32:58.229 "uuid": "005b9d43-397c-4ee4-a678-3cb98269a270", 00:32:58.229 "is_configured": true, 00:32:58.229 "data_offset": 256, 00:32:58.229 "data_size": 7936 00:32:58.229 }, 00:32:58.229 { 00:32:58.229 "name": "BaseBdev2", 00:32:58.229 "uuid": "6cc54c5b-f1c8-4f39-8fc1-8742dcb550fa", 00:32:58.229 "is_configured": true, 00:32:58.229 "data_offset": 256, 00:32:58.229 "data_size": 7936 00:32:58.229 } 00:32:58.229 ] 00:32:58.229 } 00:32:58.229 } 00:32:58.229 }' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:58.229 BaseBdev2' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:32:58.488 
07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 [2024-11-20 07:30:22.551046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:58.488 07:30:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:58.488 "name": "Existed_Raid", 00:32:58.488 "uuid": "b2bb162d-72c5-4225-a759-6aab1907f067", 00:32:58.488 "strip_size_kb": 0, 00:32:58.488 "state": "online", 00:32:58.488 "raid_level": "raid1", 00:32:58.488 "superblock": true, 00:32:58.488 "num_base_bdevs": 2, 00:32:58.488 "num_base_bdevs_discovered": 1, 00:32:58.488 "num_base_bdevs_operational": 1, 00:32:58.488 "base_bdevs_list": [ 00:32:58.488 { 00:32:58.488 "name": null, 00:32:58.488 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:58.488 "is_configured": false, 00:32:58.488 "data_offset": 0, 00:32:58.488 "data_size": 7936 00:32:58.488 }, 00:32:58.488 { 00:32:58.488 "name": "BaseBdev2", 00:32:58.488 "uuid": "6cc54c5b-f1c8-4f39-8fc1-8742dcb550fa", 00:32:58.488 "is_configured": true, 00:32:58.488 "data_offset": 256, 00:32:58.488 "data_size": 7936 00:32:58.488 } 00:32:58.488 ] 00:32:58.488 }' 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:58.488 07:30:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:59.056 07:30:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.056 [2024-11-20 07:30:23.225266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:59.056 [2024-11-20 07:30:23.225585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:59.056 [2024-11-20 07:30:23.305045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:59.056 [2024-11-20 07:30:23.305103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:59.056 [2024-11-20 07:30:23.305122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.056 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89093 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89093 ']' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89093 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89093 00:32:59.315 killing process with pid 89093 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89093' 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89093 00:32:59.315 [2024-11-20 07:30:23.396683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:59.315 07:30:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89093 00:32:59.315 [2024-11-20 07:30:23.411217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:00.253 
************************************ 00:33:00.253 END TEST raid_state_function_test_sb_md_interleaved 00:33:00.253 ************************************ 00:33:00.253 07:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:33:00.253 00:33:00.253 real 0m5.458s 00:33:00.253 user 0m8.286s 00:33:00.253 sys 0m0.801s 00:33:00.253 07:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.253 07:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.253 07:30:24 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:00.253 07:30:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:00.253 07:30:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.253 07:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:00.253 ************************************ 00:33:00.253 START TEST raid_superblock_test_md_interleaved 00:33:00.253 ************************************ 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89350 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89350 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89350 ']' 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.253 07:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.513 [2024-11-20 07:30:24.563771] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:33:00.513 [2024-11-20 07:30:24.564952] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89350 ] 00:33:00.513 [2024-11-20 07:30:24.746041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.770 [2024-11-20 07:30:24.860778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.770 [2024-11-20 07:30:25.040829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:00.770 [2024-11-20 07:30:25.040892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 malloc1 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 [2024-11-20 07:30:25.539349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:01.337 [2024-11-20 07:30:25.539465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:01.337 [2024-11-20 07:30:25.539507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:01.337 [2024-11-20 07:30:25.539521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:01.337 
[2024-11-20 07:30:25.541900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:01.337 [2024-11-20 07:30:25.541947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:01.337 pt1 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 malloc2 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 [2024-11-20 07:30:25.593724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:01.337 [2024-11-20 07:30:25.593848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:01.337 [2024-11-20 07:30:25.593879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:01.337 [2024-11-20 07:30:25.593893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:01.337 [2024-11-20 07:30:25.596519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:01.337 [2024-11-20 07:30:25.596561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:01.337 pt2 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 [2024-11-20 07:30:25.605809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:01.337 [2024-11-20 07:30:25.608493] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:01.337 [2024-11-20 07:30:25.608902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:01.337 [2024-11-20 07:30:25.609080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:01.338 [2024-11-20 07:30:25.609216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:01.338 [2024-11-20 07:30:25.609434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:01.338 [2024-11-20 07:30:25.609616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:01.338 [2024-11-20 07:30:25.609884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.338 
07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.338 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.597 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.597 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.597 "name": "raid_bdev1", 00:33:01.597 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:01.597 "strip_size_kb": 0, 00:33:01.597 "state": "online", 00:33:01.597 "raid_level": "raid1", 00:33:01.597 "superblock": true, 00:33:01.597 "num_base_bdevs": 2, 00:33:01.597 "num_base_bdevs_discovered": 2, 00:33:01.597 "num_base_bdevs_operational": 2, 00:33:01.597 "base_bdevs_list": [ 00:33:01.597 { 00:33:01.597 "name": "pt1", 00:33:01.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:01.597 "is_configured": true, 00:33:01.597 "data_offset": 256, 00:33:01.597 "data_size": 7936 00:33:01.597 }, 00:33:01.597 { 00:33:01.597 "name": "pt2", 00:33:01.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.597 "is_configured": true, 00:33:01.597 "data_offset": 256, 00:33:01.597 "data_size": 7936 00:33:01.597 } 00:33:01.597 ] 00:33:01.597 }' 00:33:01.597 07:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.597 07:30:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.855 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.855 [2024-11-20 07:30:26.126466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:02.114 "name": "raid_bdev1", 00:33:02.114 "aliases": [ 00:33:02.114 "bc1b7301-bdb0-4d1f-8b62-70f27795a11f" 00:33:02.114 ], 00:33:02.114 "product_name": "Raid Volume", 00:33:02.114 "block_size": 4128, 00:33:02.114 "num_blocks": 7936, 00:33:02.114 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:02.114 "md_size": 32, 
00:33:02.114 "md_interleave": true, 00:33:02.114 "dif_type": 0, 00:33:02.114 "assigned_rate_limits": { 00:33:02.114 "rw_ios_per_sec": 0, 00:33:02.114 "rw_mbytes_per_sec": 0, 00:33:02.114 "r_mbytes_per_sec": 0, 00:33:02.114 "w_mbytes_per_sec": 0 00:33:02.114 }, 00:33:02.114 "claimed": false, 00:33:02.114 "zoned": false, 00:33:02.114 "supported_io_types": { 00:33:02.114 "read": true, 00:33:02.114 "write": true, 00:33:02.114 "unmap": false, 00:33:02.114 "flush": false, 00:33:02.114 "reset": true, 00:33:02.114 "nvme_admin": false, 00:33:02.114 "nvme_io": false, 00:33:02.114 "nvme_io_md": false, 00:33:02.114 "write_zeroes": true, 00:33:02.114 "zcopy": false, 00:33:02.114 "get_zone_info": false, 00:33:02.114 "zone_management": false, 00:33:02.114 "zone_append": false, 00:33:02.114 "compare": false, 00:33:02.114 "compare_and_write": false, 00:33:02.114 "abort": false, 00:33:02.114 "seek_hole": false, 00:33:02.114 "seek_data": false, 00:33:02.114 "copy": false, 00:33:02.114 "nvme_iov_md": false 00:33:02.114 }, 00:33:02.114 "memory_domains": [ 00:33:02.114 { 00:33:02.114 "dma_device_id": "system", 00:33:02.114 "dma_device_type": 1 00:33:02.114 }, 00:33:02.114 { 00:33:02.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.114 "dma_device_type": 2 00:33:02.114 }, 00:33:02.114 { 00:33:02.114 "dma_device_id": "system", 00:33:02.114 "dma_device_type": 1 00:33:02.114 }, 00:33:02.114 { 00:33:02.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.114 "dma_device_type": 2 00:33:02.114 } 00:33:02.114 ], 00:33:02.114 "driver_specific": { 00:33:02.114 "raid": { 00:33:02.114 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:02.114 "strip_size_kb": 0, 00:33:02.114 "state": "online", 00:33:02.114 "raid_level": "raid1", 00:33:02.114 "superblock": true, 00:33:02.114 "num_base_bdevs": 2, 00:33:02.114 "num_base_bdevs_discovered": 2, 00:33:02.114 "num_base_bdevs_operational": 2, 00:33:02.114 "base_bdevs_list": [ 00:33:02.114 { 00:33:02.114 "name": "pt1", 00:33:02.114 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:33:02.114 "is_configured": true, 00:33:02.114 "data_offset": 256, 00:33:02.114 "data_size": 7936 00:33:02.114 }, 00:33:02.114 { 00:33:02.114 "name": "pt2", 00:33:02.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:02.114 "is_configured": true, 00:33:02.114 "data_offset": 256, 00:33:02.114 "data_size": 7936 00:33:02.114 } 00:33:02.114 ] 00:33:02.114 } 00:33:02.114 } 00:33:02.114 }' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:02.114 pt2' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:02.114 07:30:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.114 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:02.115 [2024-11-20 07:30:26.358442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc1b7301-bdb0-4d1f-8b62-70f27795a11f 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z bc1b7301-bdb0-4d1f-8b62-70f27795a11f ']' 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.115 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.115 [2024-11-20 07:30:26.402216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:02.374 [2024-11-20 07:30:26.402405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:02.374 [2024-11-20 07:30:26.402522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.374 [2024-11-20 07:30:26.402654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.374 [2024-11-20 07:30:26.402676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.374 07:30:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:02.374 07:30:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.374 [2024-11-20 07:30:26.538275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:02.374 [2024-11-20 07:30:26.541115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:02.374 [2024-11-20 07:30:26.541209] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:33:02.374 [2024-11-20 07:30:26.541295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:02.374 [2024-11-20 07:30:26.541337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:02.374 [2024-11-20 07:30:26.541351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:02.374 request: 00:33:02.374 { 00:33:02.374 "name": "raid_bdev1", 00:33:02.374 "raid_level": "raid1", 00:33:02.374 "base_bdevs": [ 00:33:02.374 "malloc1", 00:33:02.374 "malloc2" 00:33:02.374 ], 00:33:02.374 "superblock": false, 00:33:02.374 "method": "bdev_raid_create", 00:33:02.374 "req_id": 1 00:33:02.374 } 00:33:02.374 Got JSON-RPC error response 00:33:02.374 response: 00:33:02.374 { 00:33:02.374 "code": -17, 00:33:02.374 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:02.374 } 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:02.374 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.375 07:30:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.375 [2024-11-20 07:30:26.594309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:02.375 [2024-11-20 07:30:26.594516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.375 [2024-11-20 07:30:26.594579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:02.375 [2024-11-20 07:30:26.594763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.375 [2024-11-20 07:30:26.597316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.375 [2024-11-20 07:30:26.597500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:02.375 [2024-11-20 07:30:26.597714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:02.375 [2024-11-20 07:30:26.597914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:02.375 pt1 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.375 07:30:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:02.375 
"name": "raid_bdev1", 00:33:02.375 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:02.375 "strip_size_kb": 0, 00:33:02.375 "state": "configuring", 00:33:02.375 "raid_level": "raid1", 00:33:02.375 "superblock": true, 00:33:02.375 "num_base_bdevs": 2, 00:33:02.375 "num_base_bdevs_discovered": 1, 00:33:02.375 "num_base_bdevs_operational": 2, 00:33:02.375 "base_bdevs_list": [ 00:33:02.375 { 00:33:02.375 "name": "pt1", 00:33:02.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:02.375 "is_configured": true, 00:33:02.375 "data_offset": 256, 00:33:02.375 "data_size": 7936 00:33:02.375 }, 00:33:02.375 { 00:33:02.375 "name": null, 00:33:02.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:02.375 "is_configured": false, 00:33:02.375 "data_offset": 256, 00:33:02.375 "data_size": 7936 00:33:02.375 } 00:33:02.375 ] 00:33:02.375 }' 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:02.375 07:30:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.942 [2024-11-20 07:30:27.126606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:02.942 [2024-11-20 07:30:27.126728] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.942 [2024-11-20 07:30:27.126761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:02.942 [2024-11-20 07:30:27.126780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.942 [2024-11-20 07:30:27.127099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.942 [2024-11-20 07:30:27.127127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:02.942 [2024-11-20 07:30:27.127192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:02.942 [2024-11-20 07:30:27.127230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:02.942 [2024-11-20 07:30:27.127346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:02.942 [2024-11-20 07:30:27.127368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:02.942 [2024-11-20 07:30:27.127543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:02.942 [2024-11-20 07:30:27.127694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:02.942 [2024-11-20 07:30:27.127709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:02.942 [2024-11-20 07:30:27.127800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:02.942 pt2 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:02.942 07:30:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:02.942 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:02.943 "name": 
"raid_bdev1", 00:33:02.943 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:02.943 "strip_size_kb": 0, 00:33:02.943 "state": "online", 00:33:02.943 "raid_level": "raid1", 00:33:02.943 "superblock": true, 00:33:02.943 "num_base_bdevs": 2, 00:33:02.943 "num_base_bdevs_discovered": 2, 00:33:02.943 "num_base_bdevs_operational": 2, 00:33:02.943 "base_bdevs_list": [ 00:33:02.943 { 00:33:02.943 "name": "pt1", 00:33:02.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:02.943 "is_configured": true, 00:33:02.943 "data_offset": 256, 00:33:02.943 "data_size": 7936 00:33:02.943 }, 00:33:02.943 { 00:33:02.943 "name": "pt2", 00:33:02.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:02.943 "is_configured": true, 00:33:02.943 "data_offset": 256, 00:33:02.943 "data_size": 7936 00:33:02.943 } 00:33:02.943 ] 00:33:02.943 }' 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:02.943 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:03.511 07:30:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.511 [2024-11-20 07:30:27.639185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.511 "name": "raid_bdev1", 00:33:03.511 "aliases": [ 00:33:03.511 "bc1b7301-bdb0-4d1f-8b62-70f27795a11f" 00:33:03.511 ], 00:33:03.511 "product_name": "Raid Volume", 00:33:03.511 "block_size": 4128, 00:33:03.511 "num_blocks": 7936, 00:33:03.511 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:03.511 "md_size": 32, 00:33:03.511 "md_interleave": true, 00:33:03.511 "dif_type": 0, 00:33:03.511 "assigned_rate_limits": { 00:33:03.511 "rw_ios_per_sec": 0, 00:33:03.511 "rw_mbytes_per_sec": 0, 00:33:03.511 "r_mbytes_per_sec": 0, 00:33:03.511 "w_mbytes_per_sec": 0 00:33:03.511 }, 00:33:03.511 "claimed": false, 00:33:03.511 "zoned": false, 00:33:03.511 "supported_io_types": { 00:33:03.511 "read": true, 00:33:03.511 "write": true, 00:33:03.511 "unmap": false, 00:33:03.511 "flush": false, 00:33:03.511 "reset": true, 00:33:03.511 "nvme_admin": false, 00:33:03.511 "nvme_io": false, 00:33:03.511 "nvme_io_md": false, 00:33:03.511 "write_zeroes": true, 00:33:03.511 "zcopy": false, 00:33:03.511 "get_zone_info": false, 00:33:03.511 "zone_management": false, 00:33:03.511 "zone_append": false, 00:33:03.511 "compare": false, 00:33:03.511 "compare_and_write": false, 00:33:03.511 "abort": false, 00:33:03.511 "seek_hole": false, 00:33:03.511 "seek_data": false, 00:33:03.511 "copy": false, 00:33:03.511 "nvme_iov_md": 
false 00:33:03.511 }, 00:33:03.511 "memory_domains": [ 00:33:03.511 { 00:33:03.511 "dma_device_id": "system", 00:33:03.511 "dma_device_type": 1 00:33:03.511 }, 00:33:03.511 { 00:33:03.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.511 "dma_device_type": 2 00:33:03.511 }, 00:33:03.511 { 00:33:03.511 "dma_device_id": "system", 00:33:03.511 "dma_device_type": 1 00:33:03.511 }, 00:33:03.511 { 00:33:03.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.511 "dma_device_type": 2 00:33:03.511 } 00:33:03.511 ], 00:33:03.511 "driver_specific": { 00:33:03.511 "raid": { 00:33:03.511 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:03.511 "strip_size_kb": 0, 00:33:03.511 "state": "online", 00:33:03.511 "raid_level": "raid1", 00:33:03.511 "superblock": true, 00:33:03.511 "num_base_bdevs": 2, 00:33:03.511 "num_base_bdevs_discovered": 2, 00:33:03.511 "num_base_bdevs_operational": 2, 00:33:03.511 "base_bdevs_list": [ 00:33:03.511 { 00:33:03.511 "name": "pt1", 00:33:03.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:03.511 "is_configured": true, 00:33:03.511 "data_offset": 256, 00:33:03.511 "data_size": 7936 00:33:03.511 }, 00:33:03.511 { 00:33:03.511 "name": "pt2", 00:33:03.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:03.511 "is_configured": true, 00:33:03.511 "data_offset": 256, 00:33:03.511 "data_size": 7936 00:33:03.511 } 00:33:03.511 ] 00:33:03.511 } 00:33:03.511 } 00:33:03.511 }' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:03.511 pt2' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.511 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.770 [2024-11-20 07:30:27.915240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' bc1b7301-bdb0-4d1f-8b62-70f27795a11f '!=' bc1b7301-bdb0-4d1f-8b62-70f27795a11f ']' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.770 [2024-11-20 07:30:27.959010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.770 07:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.770 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:33:03.770 "name": "raid_bdev1", 00:33:03.770 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:03.770 "strip_size_kb": 0, 00:33:03.770 "state": "online", 00:33:03.770 "raid_level": "raid1", 00:33:03.770 "superblock": true, 00:33:03.770 "num_base_bdevs": 2, 00:33:03.770 "num_base_bdevs_discovered": 1, 00:33:03.770 "num_base_bdevs_operational": 1, 00:33:03.770 "base_bdevs_list": [ 00:33:03.770 { 00:33:03.770 "name": null, 00:33:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.770 "is_configured": false, 00:33:03.770 "data_offset": 0, 00:33:03.770 "data_size": 7936 00:33:03.770 }, 00:33:03.770 { 00:33:03.770 "name": "pt2", 00:33:03.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:03.770 "is_configured": true, 00:33:03.770 "data_offset": 256, 00:33:03.770 "data_size": 7936 00:33:03.770 } 00:33:03.770 ] 00:33:03.770 }' 00:33:03.770 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.770 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 [2024-11-20 07:30:28.483337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:04.339 [2024-11-20 07:30:28.483476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:04.339 [2024-11-20 07:30:28.483629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:04.339 [2024-11-20 07:30:28.483739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:33:04.339 [2024-11-20 07:30:28.483763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 [2024-11-20 07:30:28.555246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:04.339 [2024-11-20 07:30:28.555337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.339 [2024-11-20 07:30:28.555428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:04.339 [2024-11-20 07:30:28.555461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.339 [2024-11-20 07:30:28.558416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.339 [2024-11-20 07:30:28.558463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:04.339 [2024-11-20 07:30:28.558567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:04.339 [2024-11-20 07:30:28.558702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:04.339 [2024-11-20 07:30:28.558794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:04.339 [2024-11-20 07:30:28.558815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:33:04.339 [2024-11-20 07:30:28.558916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:04.339 [2024-11-20 07:30:28.559022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:04.339 [2024-11-20 07:30:28.559087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:04.339 pt2 00:33:04.339 [2024-11-20 07:30:28.559211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.339 07:30:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.339 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.339 "name": "raid_bdev1", 00:33:04.339 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:04.339 "strip_size_kb": 0, 00:33:04.339 "state": "online", 00:33:04.339 "raid_level": "raid1", 00:33:04.339 "superblock": true, 00:33:04.339 "num_base_bdevs": 2, 00:33:04.339 "num_base_bdevs_discovered": 1, 00:33:04.339 "num_base_bdevs_operational": 1, 00:33:04.339 "base_bdevs_list": [ 00:33:04.339 { 00:33:04.339 "name": null, 00:33:04.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.339 "is_configured": false, 00:33:04.339 "data_offset": 256, 00:33:04.339 "data_size": 7936 00:33:04.339 }, 00:33:04.339 { 00:33:04.340 "name": "pt2", 00:33:04.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:04.340 "is_configured": true, 00:33:04.340 "data_offset": 256, 00:33:04.340 "data_size": 7936 00:33:04.340 } 00:33:04.340 ] 00:33:04.340 }' 00:33:04.340 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.340 07:30:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:04.907 07:30:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.907 [2024-11-20 07:30:29.059494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:04.907 [2024-11-20 07:30:29.059884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:04.907 [2024-11-20 07:30:29.060058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:04.907 [2024-11-20 07:30:29.060145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:04.907 [2024-11-20 07:30:29.060163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:04.907 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.908 [2024-11-20 07:30:29.119567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:04.908 [2024-11-20 07:30:29.119766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.908 [2024-11-20 07:30:29.119808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:04.908 [2024-11-20 07:30:29.119824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.908 [2024-11-20 07:30:29.122911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.908 [2024-11-20 07:30:29.122955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:04.908 [2024-11-20 07:30:29.123080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:04.908 [2024-11-20 07:30:29.123150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:04.908 [2024-11-20 07:30:29.123295] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:04.908 [2024-11-20 07:30:29.123414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:04.908 [2024-11-20 07:30:29.123450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:04.908 [2024-11-20 07:30:29.123526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:04.908 [2024-11-20 07:30:29.123712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:33:04.908 [2024-11-20 07:30:29.123727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:04.908 [2024-11-20 07:30:29.123814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:04.908 [2024-11-20 07:30:29.123919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:04.908 [2024-11-20 07:30:29.123937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:04.908 [2024-11-20 07:30:29.124117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.908 pt1 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.908 07:30:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.908 "name": "raid_bdev1", 00:33:04.908 "uuid": "bc1b7301-bdb0-4d1f-8b62-70f27795a11f", 00:33:04.908 "strip_size_kb": 0, 00:33:04.908 "state": "online", 00:33:04.908 "raid_level": "raid1", 00:33:04.908 "superblock": true, 00:33:04.908 "num_base_bdevs": 2, 00:33:04.908 "num_base_bdevs_discovered": 1, 00:33:04.908 "num_base_bdevs_operational": 1, 00:33:04.908 "base_bdevs_list": [ 00:33:04.908 { 00:33:04.908 "name": null, 00:33:04.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.908 "is_configured": false, 00:33:04.908 "data_offset": 256, 00:33:04.908 "data_size": 7936 00:33:04.908 }, 00:33:04.908 { 00:33:04.908 "name": "pt2", 00:33:04.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:04.908 "is_configured": true, 00:33:04.908 "data_offset": 256, 00:33:04.908 "data_size": 7936 00:33:04.908 } 00:33:04.908 ] 00:33:04.908 }' 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.908 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.482 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.741 [2024-11-20 07:30:29.772581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' bc1b7301-bdb0-4d1f-8b62-70f27795a11f '!=' bc1b7301-bdb0-4d1f-8b62-70f27795a11f ']' 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89350 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89350 ']' 00:33:05.741 07:30:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89350 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89350 00:33:05.741 killing process with pid 89350 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89350' 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89350 00:33:05.741 07:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89350 00:33:05.741 [2024-11-20 07:30:29.850659] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:05.741 [2024-11-20 07:30:29.850834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.741 [2024-11-20 07:30:29.850964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.741 [2024-11-20 07:30:29.851001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:06.000 [2024-11-20 07:30:30.051243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:06.935 07:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:33:06.935 00:33:06.935 real 0m6.759s 00:33:06.935 user 0m10.605s 00:33:06.935 sys 0m0.981s 
00:33:06.935 07:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.935 07:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.935 ************************************ 00:33:06.935 END TEST raid_superblock_test_md_interleaved 00:33:06.935 ************************************ 00:33:07.194 07:30:31 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:07.194 07:30:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:07.194 07:30:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.194 07:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.194 ************************************ 00:33:07.194 START TEST raid_rebuild_test_sb_md_interleaved 00:33:07.194 ************************************ 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:07.194 07:30:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:07.194 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:07.195 
07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89674 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89674 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89674 ']' 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.195 07:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:07.195 Zero copy mechanism will not be used. 00:33:07.195 [2024-11-20 07:30:31.372412] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:33:07.195 [2024-11-20 07:30:31.372566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89674 ] 00:33:07.453 [2024-11-20 07:30:31.546335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.453 [2024-11-20 07:30:31.693098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.711 [2024-11-20 07:30:31.916037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:07.711 [2024-11-20 07:30:31.916122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 BaseBdev1_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 [2024-11-20 07:30:32.388104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:08.278 [2024-11-20 07:30:32.388205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.278 [2024-11-20 07:30:32.388239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:08.278 [2024-11-20 07:30:32.388259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.278 [2024-11-20 07:30:32.391136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.278 [2024-11-20 07:30:32.391193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:08.278 BaseBdev1 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 BaseBdev2_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:33:08.278 [2024-11-20 07:30:32.444233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:08.278 [2024-11-20 07:30:32.444562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.278 [2024-11-20 07:30:32.444620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:08.278 [2024-11-20 07:30:32.444646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.278 [2024-11-20 07:30:32.447497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.278 [2024-11-20 07:30:32.447703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:08.278 BaseBdev2 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 spare_malloc 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 spare_delay 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 [2024-11-20 07:30:32.526081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:08.278 [2024-11-20 07:30:32.526223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.278 [2024-11-20 07:30:32.526254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:08.278 [2024-11-20 07:30:32.526274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.278 [2024-11-20 07:30:32.529173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.278 [2024-11-20 07:30:32.529222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:08.278 spare 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 [2024-11-20 07:30:32.534229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:08.278 [2024-11-20 07:30:32.537183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:08.278 [2024-11-20 
07:30:32.537442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:08.278 [2024-11-20 07:30:32.537466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:08.278 [2024-11-20 07:30:32.537572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:08.278 [2024-11-20 07:30:32.537727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:08.278 [2024-11-20 07:30:32.537742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:08.278 [2024-11-20 07:30:32.537851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.278 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.536 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.536 "name": "raid_bdev1", 00:33:08.536 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:08.536 "strip_size_kb": 0, 00:33:08.536 "state": "online", 00:33:08.536 "raid_level": "raid1", 00:33:08.536 "superblock": true, 00:33:08.536 "num_base_bdevs": 2, 00:33:08.536 "num_base_bdevs_discovered": 2, 00:33:08.536 "num_base_bdevs_operational": 2, 00:33:08.536 "base_bdevs_list": [ 00:33:08.536 { 00:33:08.536 "name": "BaseBdev1", 00:33:08.536 "uuid": "4fb212d3-c83d-5963-b0f2-d47047233ca6", 00:33:08.536 "is_configured": true, 00:33:08.536 "data_offset": 256, 00:33:08.536 "data_size": 7936 00:33:08.536 }, 00:33:08.536 { 00:33:08.536 "name": "BaseBdev2", 00:33:08.536 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:08.536 "is_configured": true, 00:33:08.536 "data_offset": 256, 00:33:08.536 "data_size": 7936 00:33:08.536 } 00:33:08.536 ] 00:33:08.536 }' 00:33:08.536 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.536 07:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.794 07:30:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:08.794 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:08.794 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.794 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.794 [2024-11-20 07:30:33.070845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:09.052 07:30:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.052 [2024-11-20 07:30:33.174417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.052 07:30:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.052 "name": "raid_bdev1", 00:33:09.052 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:09.052 "strip_size_kb": 0, 00:33:09.052 "state": "online", 00:33:09.052 "raid_level": "raid1", 00:33:09.052 "superblock": true, 00:33:09.052 "num_base_bdevs": 2, 00:33:09.052 "num_base_bdevs_discovered": 1, 00:33:09.052 "num_base_bdevs_operational": 1, 00:33:09.052 "base_bdevs_list": [ 00:33:09.052 { 00:33:09.052 "name": null, 00:33:09.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.052 "is_configured": false, 00:33:09.052 "data_offset": 0, 00:33:09.052 "data_size": 7936 00:33:09.052 }, 00:33:09.052 { 00:33:09.052 "name": "BaseBdev2", 00:33:09.052 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:09.052 "is_configured": true, 00:33:09.052 "data_offset": 256, 00:33:09.052 "data_size": 7936 00:33:09.052 } 00:33:09.052 ] 00:33:09.052 }' 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.052 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.619 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:09.619 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.619 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.619 [2024-11-20 07:30:33.718671] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:09.619 [2024-11-20 07:30:33.735809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:09.619 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.619 07:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:09.619 [2024-11-20 07:30:33.738390] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:10.556 "name": "raid_bdev1", 00:33:10.556 
"uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:10.556 "strip_size_kb": 0, 00:33:10.556 "state": "online", 00:33:10.556 "raid_level": "raid1", 00:33:10.556 "superblock": true, 00:33:10.556 "num_base_bdevs": 2, 00:33:10.556 "num_base_bdevs_discovered": 2, 00:33:10.556 "num_base_bdevs_operational": 2, 00:33:10.556 "process": { 00:33:10.556 "type": "rebuild", 00:33:10.556 "target": "spare", 00:33:10.556 "progress": { 00:33:10.556 "blocks": 2560, 00:33:10.556 "percent": 32 00:33:10.556 } 00:33:10.556 }, 00:33:10.556 "base_bdevs_list": [ 00:33:10.556 { 00:33:10.556 "name": "spare", 00:33:10.556 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:10.556 "is_configured": true, 00:33:10.556 "data_offset": 256, 00:33:10.556 "data_size": 7936 00:33:10.556 }, 00:33:10.556 { 00:33:10.556 "name": "BaseBdev2", 00:33:10.556 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:10.556 "is_configured": true, 00:33:10.556 "data_offset": 256, 00:33:10.556 "data_size": 7936 00:33:10.556 } 00:33:10.556 ] 00:33:10.556 }' 00:33:10.556 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.815 [2024-11-20 07:30:34.923643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:33:10.815 [2024-11-20 07:30:34.947590] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:10.815 [2024-11-20 07:30:34.947930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:10.815 [2024-11-20 07:30:34.947960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:10.815 [2024-11-20 07:30:34.947980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:10.815 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.816 07:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.816 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.816 "name": "raid_bdev1", 00:33:10.816 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:10.816 "strip_size_kb": 0, 00:33:10.816 "state": "online", 00:33:10.816 "raid_level": "raid1", 00:33:10.816 "superblock": true, 00:33:10.816 "num_base_bdevs": 2, 00:33:10.816 "num_base_bdevs_discovered": 1, 00:33:10.816 "num_base_bdevs_operational": 1, 00:33:10.816 "base_bdevs_list": [ 00:33:10.816 { 00:33:10.816 "name": null, 00:33:10.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.816 "is_configured": false, 00:33:10.816 "data_offset": 0, 00:33:10.816 "data_size": 7936 00:33:10.816 }, 00:33:10.816 { 00:33:10.816 "name": "BaseBdev2", 00:33:10.816 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:10.816 "is_configured": true, 00:33:10.816 "data_offset": 256, 00:33:10.816 "data_size": 7936 00:33:10.816 } 00:33:10.816 ] 00:33:10.816 }' 00:33:10.816 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.816 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:11.385 "name": "raid_bdev1", 00:33:11.385 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:11.385 "strip_size_kb": 0, 00:33:11.385 "state": "online", 00:33:11.385 "raid_level": "raid1", 00:33:11.385 "superblock": true, 00:33:11.385 "num_base_bdevs": 2, 00:33:11.385 "num_base_bdevs_discovered": 1, 00:33:11.385 "num_base_bdevs_operational": 1, 00:33:11.385 "base_bdevs_list": [ 00:33:11.385 { 00:33:11.385 "name": null, 00:33:11.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.385 "is_configured": false, 00:33:11.385 "data_offset": 0, 00:33:11.385 "data_size": 7936 00:33:11.385 }, 00:33:11.385 { 00:33:11.385 "name": "BaseBdev2", 00:33:11.385 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:11.385 "is_configured": true, 00:33:11.385 "data_offset": 256, 00:33:11.385 "data_size": 7936 00:33:11.385 } 00:33:11.385 ] 00:33:11.385 }' 
00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:11.385 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.645 [2024-11-20 07:30:35.692255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:11.645 [2024-11-20 07:30:35.707992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.645 07:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:11.645 [2024-11-20 07:30:35.710824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:12.582 "name": "raid_bdev1", 00:33:12.582 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:12.582 "strip_size_kb": 0, 00:33:12.582 "state": "online", 00:33:12.582 "raid_level": "raid1", 00:33:12.582 "superblock": true, 00:33:12.582 "num_base_bdevs": 2, 00:33:12.582 "num_base_bdevs_discovered": 2, 00:33:12.582 "num_base_bdevs_operational": 2, 00:33:12.582 "process": { 00:33:12.582 "type": "rebuild", 00:33:12.582 "target": "spare", 00:33:12.582 "progress": { 00:33:12.582 "blocks": 2560, 00:33:12.582 "percent": 32 00:33:12.582 } 00:33:12.582 }, 00:33:12.582 "base_bdevs_list": [ 00:33:12.582 { 00:33:12.582 "name": "spare", 00:33:12.582 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:12.582 "is_configured": true, 00:33:12.582 "data_offset": 256, 00:33:12.582 "data_size": 7936 00:33:12.582 }, 00:33:12.582 { 00:33:12.582 "name": "BaseBdev2", 00:33:12.582 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:12.582 "is_configured": true, 00:33:12.582 "data_offset": 256, 00:33:12.582 "data_size": 7936 00:33:12.582 } 00:33:12.582 ] 00:33:12.582 }' 00:33:12.582 07:30:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:12.582 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:12.842 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=800 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:12.842 07:30:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.842 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:12.842 "name": "raid_bdev1", 00:33:12.842 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:12.842 "strip_size_kb": 0, 00:33:12.842 "state": "online", 00:33:12.842 "raid_level": "raid1", 00:33:12.842 "superblock": true, 00:33:12.842 "num_base_bdevs": 2, 00:33:12.842 "num_base_bdevs_discovered": 2, 00:33:12.842 "num_base_bdevs_operational": 2, 00:33:12.842 "process": { 00:33:12.842 "type": "rebuild", 00:33:12.842 "target": "spare", 00:33:12.842 "progress": { 00:33:12.842 "blocks": 2816, 00:33:12.842 "percent": 35 00:33:12.842 } 00:33:12.842 }, 00:33:12.842 "base_bdevs_list": [ 00:33:12.842 { 00:33:12.842 "name": "spare", 00:33:12.842 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:12.842 "is_configured": true, 00:33:12.842 "data_offset": 256, 00:33:12.842 "data_size": 7936 00:33:12.842 }, 00:33:12.842 { 00:33:12.842 "name": "BaseBdev2", 00:33:12.842 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:12.843 "is_configured": true, 00:33:12.843 "data_offset": 256, 00:33:12.843 "data_size": 7936 00:33:12.843 } 00:33:12.843 ] 00:33:12.843 }' 00:33:12.843 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:12.843 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:12.843 07:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:12.843 07:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:12.843 07:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:13.779 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.039 07:30:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:14.039 "name": "raid_bdev1", 00:33:14.039 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:14.039 "strip_size_kb": 0, 00:33:14.039 "state": "online", 00:33:14.039 "raid_level": "raid1", 00:33:14.039 "superblock": true, 00:33:14.039 "num_base_bdevs": 2, 00:33:14.039 "num_base_bdevs_discovered": 2, 00:33:14.039 "num_base_bdevs_operational": 2, 00:33:14.039 "process": { 00:33:14.039 "type": "rebuild", 00:33:14.039 "target": "spare", 00:33:14.039 "progress": { 00:33:14.039 "blocks": 5888, 00:33:14.039 "percent": 74 00:33:14.039 } 00:33:14.039 }, 00:33:14.039 "base_bdevs_list": [ 00:33:14.039 { 00:33:14.039 "name": "spare", 00:33:14.039 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:14.039 "is_configured": true, 00:33:14.039 "data_offset": 256, 00:33:14.039 "data_size": 7936 00:33:14.039 }, 00:33:14.039 { 00:33:14.039 "name": "BaseBdev2", 00:33:14.039 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:14.039 "is_configured": true, 00:33:14.039 "data_offset": 256, 00:33:14.039 "data_size": 7936 00:33:14.039 } 00:33:14.039 ] 00:33:14.039 }' 00:33:14.039 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:14.039 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:14.039 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:14.039 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:14.039 07:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:14.606 [2024-11-20 07:30:38.833504] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:14.606 [2024-11-20 07:30:38.833637] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:14.606 [2024-11-20 07:30:38.833914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.171 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:15.171 "name": "raid_bdev1", 00:33:15.172 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:15.172 "strip_size_kb": 0, 00:33:15.172 "state": "online", 00:33:15.172 "raid_level": "raid1", 00:33:15.172 "superblock": true, 00:33:15.172 "num_base_bdevs": 2, 00:33:15.172 
"num_base_bdevs_discovered": 2, 00:33:15.172 "num_base_bdevs_operational": 2, 00:33:15.172 "base_bdevs_list": [ 00:33:15.172 { 00:33:15.172 "name": "spare", 00:33:15.172 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:15.172 "is_configured": true, 00:33:15.172 "data_offset": 256, 00:33:15.172 "data_size": 7936 00:33:15.172 }, 00:33:15.172 { 00:33:15.172 "name": "BaseBdev2", 00:33:15.172 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:15.172 "is_configured": true, 00:33:15.172 "data_offset": 256, 00:33:15.172 "data_size": 7936 00:33:15.172 } 00:33:15.172 ] 00:33:15.172 }' 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.172 07:30:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:15.172 "name": "raid_bdev1", 00:33:15.172 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:15.172 "strip_size_kb": 0, 00:33:15.172 "state": "online", 00:33:15.172 "raid_level": "raid1", 00:33:15.172 "superblock": true, 00:33:15.172 "num_base_bdevs": 2, 00:33:15.172 "num_base_bdevs_discovered": 2, 00:33:15.172 "num_base_bdevs_operational": 2, 00:33:15.172 "base_bdevs_list": [ 00:33:15.172 { 00:33:15.172 "name": "spare", 00:33:15.172 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:15.172 "is_configured": true, 00:33:15.172 "data_offset": 256, 00:33:15.172 "data_size": 7936 00:33:15.172 }, 00:33:15.172 { 00:33:15.172 "name": "BaseBdev2", 00:33:15.172 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:15.172 "is_configured": true, 00:33:15.172 "data_offset": 256, 00:33:15.172 "data_size": 7936 00:33:15.172 } 00:33:15.172 ] 00:33:15.172 }' 00:33:15.172 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:15.430 07:30:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.430 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.430 "name": 
"raid_bdev1", 00:33:15.430 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:15.430 "strip_size_kb": 0, 00:33:15.430 "state": "online", 00:33:15.430 "raid_level": "raid1", 00:33:15.430 "superblock": true, 00:33:15.430 "num_base_bdevs": 2, 00:33:15.430 "num_base_bdevs_discovered": 2, 00:33:15.430 "num_base_bdevs_operational": 2, 00:33:15.430 "base_bdevs_list": [ 00:33:15.430 { 00:33:15.430 "name": "spare", 00:33:15.430 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:15.430 "is_configured": true, 00:33:15.430 "data_offset": 256, 00:33:15.430 "data_size": 7936 00:33:15.430 }, 00:33:15.430 { 00:33:15.430 "name": "BaseBdev2", 00:33:15.431 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:15.431 "is_configured": true, 00:33:15.431 "data_offset": 256, 00:33:15.431 "data_size": 7936 00:33:15.431 } 00:33:15.431 ] 00:33:15.431 }' 00:33:15.431 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.431 07:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 [2024-11-20 07:30:40.048354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:16.001 [2024-11-20 07:30:40.048560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:16.001 [2024-11-20 07:30:40.048815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:16.001 [2024-11-20 07:30:40.048920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:16.001 [2024-11-20 
07:30:40.048938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 [2024-11-20 07:30:40.116364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:16.001 [2024-11-20 07:30:40.116438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.001 [2024-11-20 07:30:40.116468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:16.001 [2024-11-20 07:30:40.116484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.001 [2024-11-20 07:30:40.119158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.001 [2024-11-20 07:30:40.119205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:16.001 [2024-11-20 07:30:40.119282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:16.001 [2024-11-20 07:30:40.119395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:16.001 [2024-11-20 07:30:40.119536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:16.001 spare 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 [2024-11-20 07:30:40.219696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:16.001 [2024-11-20 07:30:40.219740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:16.001 [2024-11-20 07:30:40.219863] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:16.001 [2024-11-20 07:30:40.220001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:16.001 [2024-11-20 07:30:40.220016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:16.001 [2024-11-20 07:30:40.220127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.001 07:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.001 "name": "raid_bdev1", 00:33:16.001 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:16.001 "strip_size_kb": 0, 00:33:16.001 "state": "online", 00:33:16.001 "raid_level": "raid1", 00:33:16.001 "superblock": true, 00:33:16.001 "num_base_bdevs": 2, 00:33:16.001 "num_base_bdevs_discovered": 2, 00:33:16.001 "num_base_bdevs_operational": 2, 00:33:16.001 "base_bdevs_list": [ 00:33:16.001 { 00:33:16.001 "name": "spare", 00:33:16.001 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:16.001 "is_configured": true, 00:33:16.001 "data_offset": 256, 00:33:16.001 "data_size": 7936 00:33:16.001 }, 00:33:16.001 { 00:33:16.001 "name": "BaseBdev2", 00:33:16.001 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:16.001 "is_configured": true, 00:33:16.001 "data_offset": 256, 00:33:16.001 "data_size": 7936 00:33:16.001 } 00:33:16.001 ] 00:33:16.001 }' 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.001 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:16.577 07:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:16.577 "name": "raid_bdev1", 00:33:16.577 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:16.577 "strip_size_kb": 0, 00:33:16.577 "state": "online", 00:33:16.577 "raid_level": "raid1", 00:33:16.577 "superblock": true, 00:33:16.577 "num_base_bdevs": 2, 00:33:16.577 "num_base_bdevs_discovered": 2, 00:33:16.577 "num_base_bdevs_operational": 2, 00:33:16.577 "base_bdevs_list": [ 00:33:16.577 { 00:33:16.577 "name": "spare", 00:33:16.577 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:16.577 "is_configured": true, 00:33:16.577 "data_offset": 256, 00:33:16.577 "data_size": 7936 00:33:16.577 }, 00:33:16.577 { 00:33:16.577 "name": "BaseBdev2", 00:33:16.577 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:16.577 "is_configured": true, 00:33:16.577 "data_offset": 256, 00:33:16.577 "data_size": 7936 00:33:16.577 } 00:33:16.577 ] 00:33:16.577 }' 00:33:16.577 07:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:16.577 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:16.835 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.836 [2024-11-20 07:30:40.980794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:16.836 07:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.836 07:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.836 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.836 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.836 "name": "raid_bdev1", 00:33:16.836 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:16.836 "strip_size_kb": 0, 00:33:16.836 "state": "online", 00:33:16.836 
"raid_level": "raid1", 00:33:16.836 "superblock": true, 00:33:16.836 "num_base_bdevs": 2, 00:33:16.836 "num_base_bdevs_discovered": 1, 00:33:16.836 "num_base_bdevs_operational": 1, 00:33:16.836 "base_bdevs_list": [ 00:33:16.836 { 00:33:16.836 "name": null, 00:33:16.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.836 "is_configured": false, 00:33:16.836 "data_offset": 0, 00:33:16.836 "data_size": 7936 00:33:16.836 }, 00:33:16.836 { 00:33:16.836 "name": "BaseBdev2", 00:33:16.836 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:16.836 "is_configured": true, 00:33:16.836 "data_offset": 256, 00:33:16.836 "data_size": 7936 00:33:16.836 } 00:33:16.836 ] 00:33:16.836 }' 00:33:16.836 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.836 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.402 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:17.402 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.402 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.402 [2024-11-20 07:30:41.508996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:17.402 [2024-11-20 07:30:41.509275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:17.402 [2024-11-20 07:30:41.509299] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:17.403 [2024-11-20 07:30:41.509366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:17.403 [2024-11-20 07:30:41.524636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:17.403 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.403 07:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:17.403 [2024-11-20 07:30:41.527173] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:33:18.339 "name": "raid_bdev1", 00:33:18.339 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:18.339 "strip_size_kb": 0, 00:33:18.339 "state": "online", 00:33:18.339 "raid_level": "raid1", 00:33:18.339 "superblock": true, 00:33:18.339 "num_base_bdevs": 2, 00:33:18.339 "num_base_bdevs_discovered": 2, 00:33:18.339 "num_base_bdevs_operational": 2, 00:33:18.339 "process": { 00:33:18.339 "type": "rebuild", 00:33:18.339 "target": "spare", 00:33:18.339 "progress": { 00:33:18.339 "blocks": 2560, 00:33:18.339 "percent": 32 00:33:18.339 } 00:33:18.339 }, 00:33:18.339 "base_bdevs_list": [ 00:33:18.339 { 00:33:18.339 "name": "spare", 00:33:18.339 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:18.339 "is_configured": true, 00:33:18.339 "data_offset": 256, 00:33:18.339 "data_size": 7936 00:33:18.339 }, 00:33:18.339 { 00:33:18.339 "name": "BaseBdev2", 00:33:18.339 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:18.339 "is_configured": true, 00:33:18.339 "data_offset": 256, 00:33:18.339 "data_size": 7936 00:33:18.339 } 00:33:18.339 ] 00:33:18.339 }' 00:33:18.339 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.598 [2024-11-20 07:30:42.700510] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:18.598 [2024-11-20 07:30:42.735935] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:18.598 [2024-11-20 07:30:42.736059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.598 [2024-11-20 07:30:42.736083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:18.598 [2024-11-20 07:30:42.736098] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.598 07:30:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.598 "name": "raid_bdev1", 00:33:18.598 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:18.598 "strip_size_kb": 0, 00:33:18.598 "state": "online", 00:33:18.598 "raid_level": "raid1", 00:33:18.598 "superblock": true, 00:33:18.598 "num_base_bdevs": 2, 00:33:18.598 "num_base_bdevs_discovered": 1, 00:33:18.598 "num_base_bdevs_operational": 1, 00:33:18.598 "base_bdevs_list": [ 00:33:18.598 { 00:33:18.598 "name": null, 00:33:18.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.598 "is_configured": false, 00:33:18.598 "data_offset": 0, 00:33:18.598 "data_size": 7936 00:33:18.598 }, 00:33:18.598 { 00:33:18.598 "name": "BaseBdev2", 00:33:18.598 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:18.598 "is_configured": true, 00:33:18.598 "data_offset": 256, 00:33:18.598 "data_size": 7936 00:33:18.598 } 00:33:18.598 ] 00:33:18.598 }' 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.598 07:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.165 07:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:19.165 07:30:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.165 07:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.165 [2024-11-20 07:30:43.297004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:19.165 [2024-11-20 07:30:43.297117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.165 [2024-11-20 07:30:43.297152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:19.165 [2024-11-20 07:30:43.297169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.165 [2024-11-20 07:30:43.297424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.165 [2024-11-20 07:30:43.297455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:19.165 [2024-11-20 07:30:43.297530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:19.165 [2024-11-20 07:30:43.297552] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:19.165 [2024-11-20 07:30:43.297566] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:19.165 [2024-11-20 07:30:43.297634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:19.165 [2024-11-20 07:30:43.313553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:19.165 spare 00:33:19.165 07:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.165 07:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:19.165 [2024-11-20 07:30:43.316167] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:20.099 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:33:20.100 "name": "raid_bdev1", 00:33:20.100 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:20.100 "strip_size_kb": 0, 00:33:20.100 "state": "online", 00:33:20.100 "raid_level": "raid1", 00:33:20.100 "superblock": true, 00:33:20.100 "num_base_bdevs": 2, 00:33:20.100 "num_base_bdevs_discovered": 2, 00:33:20.100 "num_base_bdevs_operational": 2, 00:33:20.100 "process": { 00:33:20.100 "type": "rebuild", 00:33:20.100 "target": "spare", 00:33:20.100 "progress": { 00:33:20.100 "blocks": 2560, 00:33:20.100 "percent": 32 00:33:20.100 } 00:33:20.100 }, 00:33:20.100 "base_bdevs_list": [ 00:33:20.100 { 00:33:20.100 "name": "spare", 00:33:20.100 "uuid": "ffc09861-204e-582a-8a6c-435ecde928a8", 00:33:20.100 "is_configured": true, 00:33:20.100 "data_offset": 256, 00:33:20.100 "data_size": 7936 00:33:20.100 }, 00:33:20.100 { 00:33:20.100 "name": "BaseBdev2", 00:33:20.100 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:20.100 "is_configured": true, 00:33:20.100 "data_offset": 256, 00:33:20.100 "data_size": 7936 00:33:20.100 } 00:33:20.100 ] 00:33:20.100 }' 00:33:20.100 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.359 [2024-11-20 
07:30:44.481138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.359 [2024-11-20 07:30:44.524835] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:20.359 [2024-11-20 07:30:44.524921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.359 [2024-11-20 07:30:44.524946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.359 [2024-11-20 07:30:44.524957] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.359 07:30:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.359 "name": "raid_bdev1", 00:33:20.359 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:20.359 "strip_size_kb": 0, 00:33:20.359 "state": "online", 00:33:20.359 "raid_level": "raid1", 00:33:20.359 "superblock": true, 00:33:20.359 "num_base_bdevs": 2, 00:33:20.359 "num_base_bdevs_discovered": 1, 00:33:20.359 "num_base_bdevs_operational": 1, 00:33:20.359 "base_bdevs_list": [ 00:33:20.359 { 00:33:20.359 "name": null, 00:33:20.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.359 "is_configured": false, 00:33:20.359 "data_offset": 0, 00:33:20.359 "data_size": 7936 00:33:20.359 }, 00:33:20.359 { 00:33:20.359 "name": "BaseBdev2", 00:33:20.359 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:20.359 "is_configured": true, 00:33:20.359 "data_offset": 256, 00:33:20.359 "data_size": 7936 00:33:20.359 } 00:33:20.359 ] 00:33:20.359 }' 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.359 07:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:20.928 07:30:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:20.928 "name": "raid_bdev1", 00:33:20.928 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:20.928 "strip_size_kb": 0, 00:33:20.928 "state": "online", 00:33:20.928 "raid_level": "raid1", 00:33:20.928 "superblock": true, 00:33:20.928 "num_base_bdevs": 2, 00:33:20.928 "num_base_bdevs_discovered": 1, 00:33:20.928 "num_base_bdevs_operational": 1, 00:33:20.928 "base_bdevs_list": [ 00:33:20.928 { 00:33:20.928 "name": null, 00:33:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.928 "is_configured": false, 00:33:20.928 "data_offset": 0, 00:33:20.928 "data_size": 7936 00:33:20.928 }, 00:33:20.928 { 00:33:20.928 "name": "BaseBdev2", 00:33:20.928 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:20.928 "is_configured": true, 00:33:20.928 "data_offset": 256, 
00:33:20.928 "data_size": 7936 00:33:20.928 } 00:33:20.928 ] 00:33:20.928 }' 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:20.928 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.187 [2024-11-20 07:30:45.278943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:21.187 [2024-11-20 07:30:45.279025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:21.187 [2024-11-20 07:30:45.279086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:21.187 [2024-11-20 07:30:45.279103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:21.187 [2024-11-20 07:30:45.279320] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:21.187 [2024-11-20 07:30:45.279369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:21.187 [2024-11-20 07:30:45.279448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:21.187 [2024-11-20 07:30:45.279467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:21.187 [2024-11-20 07:30:45.279492] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:21.187 [2024-11-20 07:30:45.279504] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:21.187 BaseBdev1 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.187 07:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:22.123 07:30:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:22.123 "name": "raid_bdev1", 00:33:22.123 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:22.123 "strip_size_kb": 0, 00:33:22.123 "state": "online", 00:33:22.123 "raid_level": "raid1", 00:33:22.123 "superblock": true, 00:33:22.123 "num_base_bdevs": 2, 00:33:22.123 "num_base_bdevs_discovered": 1, 00:33:22.123 "num_base_bdevs_operational": 1, 00:33:22.123 "base_bdevs_list": [ 00:33:22.123 { 00:33:22.123 "name": null, 00:33:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.123 "is_configured": false, 00:33:22.123 "data_offset": 0, 00:33:22.123 "data_size": 7936 00:33:22.123 }, 00:33:22.123 { 00:33:22.123 "name": "BaseBdev2", 00:33:22.123 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:22.123 "is_configured": true, 00:33:22.123 "data_offset": 256, 00:33:22.123 "data_size": 7936 00:33:22.123 } 00:33:22.123 ] 00:33:22.123 }' 00:33:22.123 07:30:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:22.123 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:22.701 "name": "raid_bdev1", 00:33:22.701 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:22.701 "strip_size_kb": 0, 00:33:22.701 "state": "online", 00:33:22.701 "raid_level": "raid1", 00:33:22.701 "superblock": true, 00:33:22.701 "num_base_bdevs": 2, 00:33:22.701 "num_base_bdevs_discovered": 1, 00:33:22.701 "num_base_bdevs_operational": 1, 00:33:22.701 "base_bdevs_list": [ 00:33:22.701 { 00:33:22.701 "name": 
null, 00:33:22.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.701 "is_configured": false, 00:33:22.701 "data_offset": 0, 00:33:22.701 "data_size": 7936 00:33:22.701 }, 00:33:22.701 { 00:33:22.701 "name": "BaseBdev2", 00:33:22.701 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:22.701 "is_configured": true, 00:33:22.701 "data_offset": 256, 00:33:22.701 "data_size": 7936 00:33:22.701 } 00:33:22.701 ] 00:33:22.701 }' 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.701 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.976 [2024-11-20 07:30:46.983689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:22.976 [2024-11-20 07:30:46.983897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:22.976 [2024-11-20 07:30:46.983970] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:22.976 request: 00:33:22.976 { 00:33:22.976 "base_bdev": "BaseBdev1", 00:33:22.976 "raid_bdev": "raid_bdev1", 00:33:22.976 "method": "bdev_raid_add_base_bdev", 00:33:22.976 "req_id": 1 00:33:22.976 } 00:33:22.976 Got JSON-RPC error response 00:33:22.976 response: 00:33:22.976 { 00:33:22.976 "code": -22, 00:33:22.976 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:22.976 } 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.976 07:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.913 07:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.913 "name": "raid_bdev1", 00:33:23.913 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:23.913 "strip_size_kb": 0, 
00:33:23.913 "state": "online", 00:33:23.913 "raid_level": "raid1", 00:33:23.913 "superblock": true, 00:33:23.913 "num_base_bdevs": 2, 00:33:23.913 "num_base_bdevs_discovered": 1, 00:33:23.913 "num_base_bdevs_operational": 1, 00:33:23.913 "base_bdevs_list": [ 00:33:23.913 { 00:33:23.913 "name": null, 00:33:23.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.913 "is_configured": false, 00:33:23.913 "data_offset": 0, 00:33:23.913 "data_size": 7936 00:33:23.913 }, 00:33:23.913 { 00:33:23.913 "name": "BaseBdev2", 00:33:23.913 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:23.913 "is_configured": true, 00:33:23.913 "data_offset": 256, 00:33:23.913 "data_size": 7936 00:33:23.913 } 00:33:23.913 ] 00:33:23.913 }' 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.913 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.481 07:30:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:24.481 "name": "raid_bdev1", 00:33:24.481 "uuid": "fb54fc7e-42d8-4aaf-82fb-11bfa5e0ba39", 00:33:24.481 "strip_size_kb": 0, 00:33:24.481 "state": "online", 00:33:24.481 "raid_level": "raid1", 00:33:24.481 "superblock": true, 00:33:24.481 "num_base_bdevs": 2, 00:33:24.481 "num_base_bdevs_discovered": 1, 00:33:24.481 "num_base_bdevs_operational": 1, 00:33:24.481 "base_bdevs_list": [ 00:33:24.481 { 00:33:24.481 "name": null, 00:33:24.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.481 "is_configured": false, 00:33:24.481 "data_offset": 0, 00:33:24.481 "data_size": 7936 00:33:24.481 }, 00:33:24.481 { 00:33:24.481 "name": "BaseBdev2", 00:33:24.481 "uuid": "54942a82-6134-5e22-aca6-4c407e8accad", 00:33:24.481 "is_configured": true, 00:33:24.481 "data_offset": 256, 00:33:24.481 "data_size": 7936 00:33:24.481 } 00:33:24.481 ] 00:33:24.481 }' 00:33:24.481 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89674 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89674 ']' 00:33:24.482 07:30:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89674 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89674 00:33:24.482 killing process with pid 89674 00:33:24.482 Received shutdown signal, test time was about 60.000000 seconds 00:33:24.482 00:33:24.482 Latency(us) 00:33:24.482 [2024-11-20T07:30:48.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.482 [2024-11-20T07:30:48.771Z] =================================================================================================================== 00:33:24.482 [2024-11-20T07:30:48.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89674' 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89674 00:33:24.482 [2024-11-20 07:30:48.665245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:24.482 07:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89674 00:33:24.482 [2024-11-20 07:30:48.665396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:24.482 [2024-11-20 07:30:48.665489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:33:24.482 [2024-11-20 07:30:48.665506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:24.742 [2024-11-20 07:30:48.908184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:25.679 07:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:33:25.679 00:33:25.679 real 0m18.559s 00:33:25.679 user 0m25.401s 00:33:25.679 sys 0m1.534s 00:33:25.679 ************************************ 00:33:25.679 END TEST raid_rebuild_test_sb_md_interleaved 00:33:25.679 07:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.679 07:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:25.679 ************************************ 00:33:25.679 07:30:49 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:33:25.679 07:30:49 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:33:25.680 07:30:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89674 ']' 00:33:25.680 07:30:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89674 00:33:25.680 07:30:49 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:33:25.680 ************************************ 00:33:25.680 END TEST bdev_raid 00:33:25.680 ************************************ 00:33:25.680 00:33:25.680 real 13m3.845s 00:33:25.680 user 18m28.948s 00:33:25.680 sys 1m46.728s 00:33:25.680 07:30:49 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.680 07:30:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:25.680 07:30:49 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:25.680 07:30:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:25.680 07:30:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.680 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:33:25.680 
************************************ 00:33:25.680 START TEST spdkcli_raid 00:33:25.680 ************************************ 00:33:25.680 07:30:49 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:25.939 * Looking for test storage... 00:33:25.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:25.939 07:30:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:25.939 07:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.940 07:30:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:25.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.940 --rc genhtml_branch_coverage=1 00:33:25.940 --rc genhtml_function_coverage=1 00:33:25.940 --rc genhtml_legend=1 00:33:25.940 --rc geninfo_all_blocks=1 00:33:25.940 --rc geninfo_unexecuted_blocks=1 00:33:25.940 00:33:25.940 ' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:25.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.940 --rc genhtml_branch_coverage=1 00:33:25.940 --rc genhtml_function_coverage=1 00:33:25.940 --rc genhtml_legend=1 00:33:25.940 --rc geninfo_all_blocks=1 00:33:25.940 --rc geninfo_unexecuted_blocks=1 00:33:25.940 00:33:25.940 ' 00:33:25.940 
07:30:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:25.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.940 --rc genhtml_branch_coverage=1 00:33:25.940 --rc genhtml_function_coverage=1 00:33:25.940 --rc genhtml_legend=1 00:33:25.940 --rc geninfo_all_blocks=1 00:33:25.940 --rc geninfo_unexecuted_blocks=1 00:33:25.940 00:33:25.940 ' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:25.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.940 --rc genhtml_branch_coverage=1 00:33:25.940 --rc genhtml_function_coverage=1 00:33:25.940 --rc genhtml_legend=1 00:33:25.940 --rc geninfo_all_blocks=1 00:33:25.940 --rc geninfo_unexecuted_blocks=1 00:33:25.940 00:33:25.940 ' 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:33:25.940 07:30:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:25.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90363 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90363 00:33:25.940 07:30:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90363 ']' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.940 07:30:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:26.199 [2024-11-20 07:30:50.305616] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:33:26.199 [2024-11-20 07:30:50.305808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90363 ] 00:33:26.458 [2024-11-20 07:30:50.489237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:26.458 [2024-11-20 07:30:50.613658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.458 [2024-11-20 07:30:50.613676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:33:27.396 07:30:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:27.396 07:30:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.396 07:30:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:27.396 07:30:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:27.396 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:27.396 ' 00:33:28.773 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:33:28.773 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:33:29.031 07:30:53 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:33:29.031 07:30:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.031 07:30:53 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:33:29.031 07:30:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:33:29.031 07:30:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.031 07:30:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:29.031 07:30:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:33:29.031 ' 00:33:30.409 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:33:30.409 07:30:54 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:33:30.409 07:30:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.409 07:30:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.409 07:30:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:33:30.409 07:30:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.409 07:30:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.409 07:30:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:33:30.409 07:30:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:33:30.976 07:30:55 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:33:30.976 07:30:55 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:33:30.976 07:30:55 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:33:30.976 07:30:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.976 07:30:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.976 07:30:55 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:33:30.976 07:30:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.976 07:30:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.976 07:30:55 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:33:30.976 ' 00:33:31.911 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:33:32.206 07:30:56 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:33:32.206 07:30:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.206 07:30:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:32.206 07:30:56 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:33:32.206 07:30:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.206 07:30:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:32.206 07:30:56 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:33:32.206 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:33:32.206 ' 00:33:33.594 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:33:33.594 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:33:33.594 07:30:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 07:30:57 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90363 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90363 ']' 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90363 00:33:33.594 07:30:57 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.594 07:30:57 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90363 00:33:33.852 killing process with pid 90363 00:33:33.852 07:30:57 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.852 07:30:57 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.852 07:30:57 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90363' 00:33:33.852 07:30:57 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90363 00:33:33.853 07:30:57 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90363 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90363 ']' 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90363 00:33:35.755 07:30:59 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90363 ']' 00:33:35.755 Process with pid 90363 is not found 00:33:35.755 07:30:59 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90363 00:33:35.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90363) - No such process 00:33:35.755 07:30:59 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90363 is not found' 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:35.755 07:30:59 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:35.755 ************************************ 00:33:35.755 END TEST spdkcli_raid 
00:33:35.755 ************************************ 00:33:35.755 00:33:35.755 real 0m9.989s 00:33:35.755 user 0m20.678s 00:33:35.755 sys 0m1.181s 00:33:35.755 07:30:59 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.755 07:30:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:35.755 07:30:59 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:35.755 07:30:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:35.755 07:30:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.755 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:33:35.755 ************************************ 00:33:35.755 START TEST blockdev_raid5f 00:33:35.755 ************************************ 00:33:35.755 07:30:59 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:36.014 * Looking for test storage... 00:33:36.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.014 07:31:00 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.014 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.014 --rc genhtml_branch_coverage=1 00:33:36.014 --rc genhtml_function_coverage=1 00:33:36.014 --rc genhtml_legend=1 00:33:36.014 --rc geninfo_all_blocks=1 00:33:36.014 --rc geninfo_unexecuted_blocks=1 00:33:36.014 00:33:36.014 ' 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.014 --rc genhtml_branch_coverage=1 00:33:36.014 --rc genhtml_function_coverage=1 00:33:36.014 --rc genhtml_legend=1 00:33:36.014 --rc geninfo_all_blocks=1 00:33:36.014 --rc geninfo_unexecuted_blocks=1 00:33:36.014 00:33:36.014 ' 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.014 --rc genhtml_branch_coverage=1 00:33:36.014 --rc genhtml_function_coverage=1 00:33:36.014 --rc genhtml_legend=1 00:33:36.014 --rc geninfo_all_blocks=1 00:33:36.014 --rc geninfo_unexecuted_blocks=1 00:33:36.014 00:33:36.014 ' 00:33:36.014 07:31:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.014 --rc genhtml_branch_coverage=1 00:33:36.014 --rc genhtml_function_coverage=1 00:33:36.014 --rc genhtml_legend=1 00:33:36.014 --rc geninfo_all_blocks=1 00:33:36.014 --rc geninfo_unexecuted_blocks=1 00:33:36.014 00:33:36.014 ' 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:33:36.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:33:36.014 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90638 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:36.015 07:31:00 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90638 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90638 ']' 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.015 07:31:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:36.273 [2024-11-20 07:31:00.363118] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:33:36.273 [2024-11-20 07:31:00.363578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90638 ] 00:33:36.273 [2024-11-20 07:31:00.543668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.532 [2024-11-20 07:31:00.657369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 Malloc0 00:33:37.467 Malloc1 00:33:37.467 Malloc2 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 07:31:01 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:37.467 07:31:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.467 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "95406c71-2fc7-4f17-b65c-c2008005c401"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95406c71-2fc7-4f17-b65c-c2008005c401",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "95406c71-2fc7-4f17-b65c-c2008005c401",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5820dd3a-1de2-43a9-b9b0-e21677dc4a65",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "f23967b2-b3db-4a29-b6ff-f33dcb6f0fe0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1669477b-2f86-4b2b-b55b-c83a6825ff80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:33:37.727 07:31:01 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90638 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90638 ']' 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90638 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.727 
07:31:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90638 00:33:37.727 killing process with pid 90638 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90638' 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90638 00:33:37.727 07:31:01 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90638 00:33:40.259 07:31:04 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:40.259 07:31:04 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:40.259 07:31:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:40.259 07:31:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.259 07:31:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:40.259 ************************************ 00:33:40.259 START TEST bdev_hello_world 00:33:40.259 ************************************ 00:33:40.259 07:31:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:40.259 [2024-11-20 07:31:04.179839] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:33:40.259 [2024-11-20 07:31:04.180001] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90694 ] 00:33:40.259 [2024-11-20 07:31:04.342900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.259 [2024-11-20 07:31:04.460538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.888 [2024-11-20 07:31:05.003869] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:40.888 [2024-11-20 07:31:05.003949] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:33:40.888 [2024-11-20 07:31:05.003976] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:40.888 [2024-11-20 07:31:05.004496] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:40.888 [2024-11-20 07:31:05.004740] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:40.888 [2024-11-20 07:31:05.004771] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:40.888 [2024-11-20 07:31:05.004844] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:33:40.888 00:33:40.888 [2024-11-20 07:31:05.004884] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:42.265 00:33:42.265 real 0m2.085s 00:33:42.265 user 0m1.600s 00:33:42.265 sys 0m0.360s 00:33:42.265 07:31:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.265 ************************************ 00:33:42.265 END TEST bdev_hello_world 00:33:42.265 ************************************ 00:33:42.265 07:31:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:42.265 07:31:06 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:33:42.265 07:31:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:42.265 07:31:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.265 07:31:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:42.265 ************************************ 00:33:42.265 START TEST bdev_bounds 00:33:42.265 ************************************ 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:33:42.265 Process bdevio pid: 90742 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90742 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90742' 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90742 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90742 ']' 00:33:42.265 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.265 07:31:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:42.265 [2024-11-20 07:31:06.338643] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:33:42.265 [2024-11-20 07:31:06.339140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90742 ] 00:33:42.265 [2024-11-20 07:31:06.523885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:42.523 [2024-11-20 07:31:06.655492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.523 [2024-11-20 07:31:06.655683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:42.523 [2024-11-20 07:31:06.655924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.088 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.088 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:33:43.088 07:31:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:43.347 I/O targets: 00:33:43.347 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:33:43.347 00:33:43.347 00:33:43.347 CUnit 
- A unit testing framework for C - Version 2.1-3 00:33:43.347 http://cunit.sourceforge.net/ 00:33:43.347 00:33:43.347 00:33:43.347 Suite: bdevio tests on: raid5f 00:33:43.347 Test: blockdev write read block ...passed 00:33:43.347 Test: blockdev write zeroes read block ...passed 00:33:43.347 Test: blockdev write zeroes read no split ...passed 00:33:43.347 Test: blockdev write zeroes read split ...passed 00:33:43.605 Test: blockdev write zeroes read split partial ...passed 00:33:43.605 Test: blockdev reset ...passed 00:33:43.605 Test: blockdev write read 8 blocks ...passed 00:33:43.605 Test: blockdev write read size > 128k ...passed 00:33:43.605 Test: blockdev write read invalid size ...passed 00:33:43.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:43.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:43.605 Test: blockdev write read max offset ...passed 00:33:43.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:43.605 Test: blockdev writev readv 8 blocks ...passed 00:33:43.605 Test: blockdev writev readv 30 x 1block ...passed 00:33:43.605 Test: blockdev writev readv block ...passed 00:33:43.605 Test: blockdev writev readv size > 128k ...passed 00:33:43.605 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:43.605 Test: blockdev comparev and writev ...passed 00:33:43.605 Test: blockdev nvme passthru rw ...passed 00:33:43.605 Test: blockdev nvme passthru vendor specific ...passed 00:33:43.605 Test: blockdev nvme admin passthru ...passed 00:33:43.605 Test: blockdev copy ...passed 00:33:43.605 00:33:43.605 Run Summary: Type Total Ran Passed Failed Inactive 00:33:43.605 suites 1 1 n/a 0 0 00:33:43.605 tests 23 23 23 0 0 00:33:43.605 asserts 130 130 130 0 n/a 00:33:43.605 00:33:43.605 Elapsed time = 0.617 seconds 00:33:43.605 0 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90742 00:33:43.605 07:31:07 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90742 ']' 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90742 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90742 00:33:43.605 killing process with pid 90742 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90742' 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90742 00:33:43.605 07:31:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90742 00:33:44.979 07:31:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:33:44.979 00:33:44.979 real 0m2.751s 00:33:44.979 user 0m6.762s 00:33:44.979 sys 0m0.508s 00:33:44.979 ************************************ 00:33:44.979 END TEST bdev_bounds 00:33:44.979 ************************************ 00:33:44.979 07:31:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.979 07:31:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:44.979 07:31:09 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:44.979 07:31:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:44.979 07:31:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.979 07:31:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:44.979 ************************************ 00:33:44.979 START TEST bdev_nbd 00:33:44.979 ************************************ 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90797 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90797 /var/tmp/spdk-nbd.sock 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90797 ']' 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:44.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.979 07:31:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:44.979 [2024-11-20 07:31:09.167782] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:33:44.979 [2024-11-20 07:31:09.168258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.238 [2024-11-20 07:31:09.349890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.238 [2024-11-20 07:31:09.465341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:45.805 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:46.063 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:46.063 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:46.063 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:46.063 07:31:10 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:46.322 1+0 records in 00:33:46.322 1+0 records out 00:33:46.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338113 s, 12.1 MB/s 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:46.322 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:46.581 { 00:33:46.581 "nbd_device": "/dev/nbd0", 00:33:46.581 "bdev_name": "raid5f" 00:33:46.581 } 00:33:46.581 ]' 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:46.581 { 00:33:46.581 "nbd_device": "/dev/nbd0", 00:33:46.581 "bdev_name": "raid5f" 00:33:46.581 } 00:33:46.581 ]' 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:46.581 07:31:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:46.839 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:46.840 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:46.840 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:47.098 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:47.099 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:33:47.357 /dev/nbd0 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:47.357 07:31:11 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:47.357 1+0 records in 00:33:47.357 1+0 records out 00:33:47.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025074 s, 16.3 MB/s 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.357 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:47.616 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:47.616 { 00:33:47.616 "nbd_device": "/dev/nbd0", 00:33:47.616 "bdev_name": "raid5f" 00:33:47.616 } 00:33:47.616 ]' 00:33:47.616 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:47.616 { 00:33:47.616 "nbd_device": "/dev/nbd0", 00:33:47.616 "bdev_name": "raid5f" 00:33:47.616 } 00:33:47.616 ]' 00:33:47.616 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:47.875 256+0 records in 00:33:47.875 256+0 records out 00:33:47.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010537 s, 99.5 MB/s 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:47.875 256+0 records in 00:33:47.875 256+0 records out 00:33:47.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0401102 s, 26.1 MB/s 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:47.875 07:31:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.134 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:48.409 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:33:48.410 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:48.695 malloc_lvol_verify 00:33:48.695 07:31:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:48.954 3b4e5678-bd1c-4103-9f35-50b06e5b8b19 00:33:48.954 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:49.213 9a5e103e-4df3-4f6b-85e9-9f72703abba9 00:33:49.213 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:49.472 /dev/nbd0 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:33:49.472 mke2fs 1.47.0 (5-Feb-2023) 00:33:49.472 Discarding device blocks: 0/4096 done 00:33:49.472 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:49.472 00:33:49.472 Allocating group tables: 0/1 done 00:33:49.472 Writing inode tables: 0/1 done 00:33:49.472 Creating journal (1024 blocks): done 00:33:49.472 Writing superblocks and filesystem accounting information: 0/1 done 00:33:49.472 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:49.472 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90797 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90797 ']' 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90797 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90797 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90797' 00:33:49.731 killing process with pid 90797 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90797 00:33:49.731 07:31:13 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90797 00:33:51.106 07:31:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:33:51.106 00:33:51.106 real 0m6.123s 00:33:51.106 user 0m8.749s 00:33:51.106 sys 0m1.358s 00:33:51.106 07:31:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.106 ************************************ 00:33:51.106 END TEST bdev_nbd 00:33:51.106 07:31:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:51.106 ************************************ 00:33:51.106 07:31:15 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:33:51.106 07:31:15 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:33:51.106 07:31:15 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:33:51.107 07:31:15 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:33:51.107 07:31:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.107 07:31:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.107 07:31:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:51.107 ************************************ 00:33:51.107 START TEST bdev_fio 00:33:51.107 ************************************ 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:51.107 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:51.107 ************************************ 00:33:51.107 START TEST bdev_fio_rw_verify 00:33:51.107 ************************************ 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:51.107 07:31:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:51.366 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:51.366 fio-3.35 00:33:51.366 Starting 1 thread 00:34:03.569 00:34:03.569 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91009: Wed Nov 20 07:31:26 2024 00:34:03.569 read: IOPS=8959, BW=35.0MiB/s (36.7MB/s)(350MiB/10001msec) 00:34:03.569 slat (usec): min=20, max=236, avg=27.41, stdev= 4.57 00:34:03.569 clat (usec): min=15, max=611, avg=177.63, stdev=66.73 00:34:03.569 lat (usec): min=46, max=651, avg=205.04, stdev=67.86 00:34:03.569 clat percentiles (usec): 00:34:03.569 | 50.000th=[ 176], 99.000th=[ 310], 99.900th=[ 371], 99.990th=[ 420], 00:34:03.569 | 99.999th=[ 611] 00:34:03.569 write: IOPS=9445, BW=36.9MiB/s (38.7MB/s)(364MiB/9868msec); 0 zone resets 00:34:03.569 slat (usec): min=11, max=234, avg=22.24, stdev= 5.44 00:34:03.569 clat (usec): min=76, max=1429, avg=407.30, stdev=65.63 00:34:03.569 lat (usec): min=96, max=1454, avg=429.55, stdev=68.19 00:34:03.569 clat percentiles (usec): 00:34:03.569 | 50.000th=[ 408], 99.000th=[ 603], 99.900th=[ 881], 99.990th=[ 1090], 00:34:03.569 | 99.999th=[ 1434] 00:34:03.569 bw ( KiB/s): min=31824, max=45528, per=98.08%, avg=37055.16, stdev=3067.32, samples=19 00:34:03.569 iops : min= 7956, max=11382, avg=9263.79, stdev=766.83, samples=19 00:34:03.569 lat (usec) : 20=0.01%, 50=0.01%, 100=7.08%, 
250=33.48%, 500=57.54% 00:34:03.569 lat (usec) : 750=1.71%, 1000=0.19% 00:34:03.569 lat (msec) : 2=0.01% 00:34:03.569 cpu : usr=98.75%, sys=0.42%, ctx=31, majf=0, minf=7737 00:34:03.569 IO depths : 1=7.6%, 2=19.6%, 4=55.4%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.569 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.569 issued rwts: total=89607,93208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.569 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:03.569 00:34:03.569 Run status group 0 (all jobs): 00:34:03.569 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=350MiB (367MB), run=10001-10001msec 00:34:03.569 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=364MiB (382MB), run=9868-9868msec 00:34:03.828 ----------------------------------------------------- 00:34:03.828 Suppressions used: 00:34:03.828 count bytes template 00:34:03.828 1 7 /usr/src/fio/parse.c 00:34:03.828 807 77472 /usr/src/fio/iolog.c 00:34:03.828 1 8 libtcmalloc_minimal.so 00:34:03.828 1 904 libcrypto.so 00:34:03.828 ----------------------------------------------------- 00:34:03.828 00:34:03.828 00:34:03.828 real 0m12.718s 00:34:03.828 user 0m13.050s 00:34:03.828 sys 0m0.907s 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:03.828 ************************************ 00:34:03.828 END TEST bdev_fio_rw_verify 00:34:03.828 ************************************ 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "95406c71-2fc7-4f17-b65c-c2008005c401"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "95406c71-2fc7-4f17-b65c-c2008005c401",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "95406c71-2fc7-4f17-b65c-c2008005c401",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5820dd3a-1de2-43a9-b9b0-e21677dc4a65",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "f23967b2-b3db-4a29-b6ff-f33dcb6f0fe0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1669477b-2f86-4b2b-b55b-c83a6825ff80",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:34:03.828 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:34:04.195 /home/vagrant/spdk_repo/spdk 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:34:04.195 00:34:04.195 real 0m12.935s 00:34:04.195 user 0m13.150s 00:34:04.195 sys 0m0.999s 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.195 07:31:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:04.195 ************************************ 00:34:04.195 END TEST bdev_fio 00:34:04.195 ************************************ 00:34:04.195 07:31:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:04.195 07:31:28 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:04.195 07:31:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:04.195 07:31:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.195 07:31:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.195 ************************************ 00:34:04.195 START TEST bdev_verify 00:34:04.195 ************************************ 00:34:04.195 07:31:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:04.195 [2024-11-20 07:31:28.289658] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:34:04.195 [2024-11-20 07:31:28.289815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91169 ] 00:34:04.453 [2024-11-20 07:31:28.469414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:04.453 [2024-11-20 07:31:28.616697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.453 [2024-11-20 07:31:28.616712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.021 Running I/O for 5 seconds... 00:34:07.336 10271.00 IOPS, 40.12 MiB/s [2024-11-20T07:31:32.563Z] 10337.50 IOPS, 40.38 MiB/s [2024-11-20T07:31:33.496Z] 10316.67 IOPS, 40.30 MiB/s [2024-11-20T07:31:34.431Z] 10366.75 IOPS, 40.50 MiB/s [2024-11-20T07:31:34.431Z] 10437.20 IOPS, 40.77 MiB/s 00:34:10.142 Latency(us) 00:34:10.142 [2024-11-20T07:31:34.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.142 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:10.142 Verification LBA range: start 0x0 length 0x2000 00:34:10.142 raid5f : 5.02 5224.68 20.41 0.00 0.00 37033.30 253.21 30980.65 00:34:10.142 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:10.142 Verification LBA range: start 0x2000 length 0x2000 00:34:10.142 raid5f : 5.01 5210.44 20.35 0.00 0.00 37063.33 307.20 32172.22 00:34:10.142 [2024-11-20T07:31:34.431Z] =================================================================================================================== 00:34:10.142 [2024-11-20T07:31:34.431Z] Total : 10435.12 40.76 0.00 0.00 37048.28 253.21 32172.22 00:34:11.517 00:34:11.517 real 0m7.362s 00:34:11.517 user 0m13.433s 00:34:11.517 sys 0m0.385s 00:34:11.517 07:31:35 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.518 07:31:35 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:11.518 ************************************ 00:34:11.518 END TEST bdev_verify 00:34:11.518 ************************************ 00:34:11.518 07:31:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:11.518 07:31:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:11.518 07:31:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.518 07:31:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:11.518 ************************************ 00:34:11.518 START TEST bdev_verify_big_io 00:34:11.518 ************************************ 00:34:11.518 07:31:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:11.518 [2024-11-20 07:31:35.716127] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:11.518 [2024-11-20 07:31:35.716318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91264 ] 00:34:11.776 [2024-11-20 07:31:35.893459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:11.776 [2024-11-20 07:31:36.038178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.776 [2024-11-20 07:31:36.038180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.344 Running I/O for 5 seconds... 
00:34:14.657 506.00 IOPS, 31.62 MiB/s [2024-11-20T07:31:39.881Z] 569.50 IOPS, 35.59 MiB/s [2024-11-20T07:31:40.816Z] 612.67 IOPS, 38.29 MiB/s [2024-11-20T07:31:41.752Z] 634.50 IOPS, 39.66 MiB/s [2024-11-20T07:31:42.011Z] 685.40 IOPS, 42.84 MiB/s 00:34:17.722 Latency(us) 00:34:17.722 [2024-11-20T07:31:42.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.722 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:17.722 Verification LBA range: start 0x0 length 0x200 00:34:17.722 raid5f : 5.32 357.95 22.37 0.00 0.00 8951745.41 247.62 461373.44 00:34:17.722 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:17.722 Verification LBA range: start 0x200 length 0x200 00:34:17.722 raid5f : 5.27 337.52 21.09 0.00 0.00 9484502.72 203.87 440401.92 00:34:17.722 [2024-11-20T07:31:42.011Z] =================================================================================================================== 00:34:17.722 [2024-11-20T07:31:42.011Z] Total : 695.47 43.47 0.00 0.00 9209078.33 203.87 461373.44 00:34:19.098 00:34:19.098 real 0m7.524s 00:34:19.098 user 0m13.776s 00:34:19.098 sys 0m0.385s 00:34:19.098 07:31:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.098 07:31:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.098 ************************************ 00:34:19.098 END TEST bdev_verify_big_io 00:34:19.098 ************************************ 00:34:19.098 07:31:43 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:19.098 07:31:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:19.098 07:31:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.098 07:31:43 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:19.098 ************************************ 00:34:19.098 START TEST bdev_write_zeroes 00:34:19.098 ************************************ 00:34:19.098 07:31:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:19.098 [2024-11-20 07:31:43.297746] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:19.098 [2024-11-20 07:31:43.298012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91363 ] 00:34:19.357 [2024-11-20 07:31:43.478913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.357 [2024-11-20 07:31:43.609451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.925 Running I/O for 1 seconds... 
00:34:21.119 23679.00 IOPS, 92.50 MiB/s 00:34:21.119 Latency(us) 00:34:21.119 [2024-11-20T07:31:45.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.119 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:21.119 raid5f : 1.01 23635.48 92.33 0.00 0.00 5395.54 1899.05 7864.32 00:34:21.119 [2024-11-20T07:31:45.408Z] =================================================================================================================== 00:34:21.119 [2024-11-20T07:31:45.408Z] Total : 23635.48 92.33 0.00 0.00 5395.54 1899.05 7864.32 00:34:22.519 00:34:22.519 real 0m3.212s 00:34:22.519 user 0m2.728s 00:34:22.519 sys 0m0.353s 00:34:22.519 07:31:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.519 07:31:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 ************************************ 00:34:22.519 END TEST bdev_write_zeroes 00:34:22.519 ************************************ 00:34:22.519 07:31:46 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:22.519 07:31:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:22.519 07:31:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.519 07:31:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 ************************************ 00:34:22.519 START TEST bdev_json_nonenclosed 00:34:22.519 ************************************ 00:34:22.519 07:31:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:22.519 [2024-11-20 
07:31:46.565717] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:22.519 [2024-11-20 07:31:46.566238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91417 ] 00:34:22.519 [2024-11-20 07:31:46.753778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.779 [2024-11-20 07:31:46.880317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.779 [2024-11-20 07:31:46.880509] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:22.779 [2024-11-20 07:31:46.880551] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:22.779 [2024-11-20 07:31:46.880566] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:23.038 00:34:23.038 real 0m0.673s 00:34:23.038 user 0m0.415s 00:34:23.038 sys 0m0.152s 00:34:23.038 07:31:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.038 07:31:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:23.038 ************************************ 00:34:23.038 END TEST bdev_json_nonenclosed 00:34:23.038 ************************************ 00:34:23.038 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:23.038 07:31:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:23.038 07:31:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.038 07:31:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:23.038 
************************************ 00:34:23.038 START TEST bdev_json_nonarray 00:34:23.038 ************************************ 00:34:23.038 07:31:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:23.038 [2024-11-20 07:31:47.294213] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:23.038 [2024-11-20 07:31:47.294398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91447 ] 00:34:23.297 [2024-11-20 07:31:47.478512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.556 [2024-11-20 07:31:47.620655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.556 [2024-11-20 07:31:47.620817] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:23.556 [2024-11-20 07:31:47.620866] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:23.556 [2024-11-20 07:31:47.620943] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:23.815 00:34:23.815 real 0m0.701s 00:34:23.815 user 0m0.449s 00:34:23.815 sys 0m0.147s 00:34:23.815 07:31:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.815 07:31:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:23.815 ************************************ 00:34:23.815 END TEST bdev_json_nonarray 00:34:23.815 ************************************ 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:34:23.815 07:31:47 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:34:23.815 00:34:23.815 real 0m47.939s 00:34:23.815 user 1m5.132s 00:34:23.815 sys 0m5.744s 00:34:23.815 07:31:47 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.815 07:31:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:23.815 
************************************ 00:34:23.815 END TEST blockdev_raid5f 00:34:23.815 ************************************ 00:34:23.815 07:31:47 -- spdk/autotest.sh@194 -- # uname -s 00:34:23.815 07:31:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:34:23.815 07:31:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:23.815 07:31:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:23.815 07:31:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:34:23.815 07:31:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.815 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:34:23.815 07:31:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:23.815 07:31:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:23.815 07:31:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:23.815 07:31:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:23.815 07:31:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:23.815 07:31:48 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:34:23.815 07:31:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:23.815 07:31:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.815 07:31:48 -- common/autotest_common.sh@10 -- # set +x 00:34:23.815 07:31:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:23.815 07:31:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:23.815 07:31:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:23.815 07:31:48 -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 INFO: APP EXITING 00:34:25.720 INFO: killing all VMs 00:34:25.720 INFO: killing vhost app 00:34:25.720 INFO: EXIT DONE 00:34:25.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:25.978 Waiting for block devices as requested 00:34:25.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:25.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:26.916 Cleaning 00:34:26.916 Removing: /var/run/dpdk/spdk0/config 00:34:26.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:26.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:26.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:26.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:26.916 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:26.916 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:26.916 Removing: /dev/shm/spdk_tgt_trace.pid57009 00:34:26.916 Removing: /var/run/dpdk/spdk0 00:34:26.916 Removing: /var/run/dpdk/spdk_pid56774 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57009 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57239 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57348 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57397 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57532 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57550 
00:34:26.916 Removing: /var/run/dpdk/spdk_pid57760 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57866 00:34:26.916 Removing: /var/run/dpdk/spdk_pid57974 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58106 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58214 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58248 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58290 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58366 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58472 00:34:26.916 Removing: /var/run/dpdk/spdk_pid58947 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59024 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59098 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59114 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59265 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59287 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59435 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59451 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59526 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59544 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59608 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59626 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59832 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59869 00:34:26.916 Removing: /var/run/dpdk/spdk_pid59958 00:34:26.916 Removing: /var/run/dpdk/spdk_pid61357 00:34:26.916 Removing: /var/run/dpdk/spdk_pid61574 00:34:26.916 Removing: /var/run/dpdk/spdk_pid61720 00:34:26.916 Removing: /var/run/dpdk/spdk_pid62374 00:34:26.916 Removing: /var/run/dpdk/spdk_pid62591 00:34:26.916 Removing: /var/run/dpdk/spdk_pid62737 00:34:26.916 Removing: /var/run/dpdk/spdk_pid63391 00:34:26.916 Removing: /var/run/dpdk/spdk_pid63727 00:34:26.916 Removing: /var/run/dpdk/spdk_pid63867 00:34:26.916 Removing: /var/run/dpdk/spdk_pid65285 00:34:26.916 Removing: /var/run/dpdk/spdk_pid65544 00:34:26.916 Removing: /var/run/dpdk/spdk_pid65697 00:34:26.916 Removing: /var/run/dpdk/spdk_pid67111 00:34:26.916 Removing: /var/run/dpdk/spdk_pid67364 00:34:26.916 Removing: /var/run/dpdk/spdk_pid67515 
00:34:26.916 Removing: /var/run/dpdk/spdk_pid68923 00:34:26.916 Removing: /var/run/dpdk/spdk_pid69375 00:34:26.916 Removing: /var/run/dpdk/spdk_pid69527 00:34:26.916 Removing: /var/run/dpdk/spdk_pid71039 00:34:26.916 Removing: /var/run/dpdk/spdk_pid71310 00:34:27.175 Removing: /var/run/dpdk/spdk_pid71456 00:34:27.175 Removing: /var/run/dpdk/spdk_pid72969 00:34:27.175 Removing: /var/run/dpdk/spdk_pid73235 00:34:27.175 Removing: /var/run/dpdk/spdk_pid73376 00:34:27.175 Removing: /var/run/dpdk/spdk_pid74897 00:34:27.175 Removing: /var/run/dpdk/spdk_pid75395 00:34:27.175 Removing: /var/run/dpdk/spdk_pid75542 00:34:27.175 Removing: /var/run/dpdk/spdk_pid75686 00:34:27.175 Removing: /var/run/dpdk/spdk_pid76138 00:34:27.175 Removing: /var/run/dpdk/spdk_pid76902 00:34:27.175 Removing: /var/run/dpdk/spdk_pid77289 00:34:27.175 Removing: /var/run/dpdk/spdk_pid78008 00:34:27.175 Removing: /var/run/dpdk/spdk_pid78488 00:34:27.175 Removing: /var/run/dpdk/spdk_pid79274 00:34:27.175 Removing: /var/run/dpdk/spdk_pid79690 00:34:27.175 Removing: /var/run/dpdk/spdk_pid81687 00:34:27.175 Removing: /var/run/dpdk/spdk_pid82140 00:34:27.175 Removing: /var/run/dpdk/spdk_pid82580 00:34:27.175 Removing: /var/run/dpdk/spdk_pid84706 00:34:27.175 Removing: /var/run/dpdk/spdk_pid85203 00:34:27.175 Removing: /var/run/dpdk/spdk_pid85707 00:34:27.175 Removing: /var/run/dpdk/spdk_pid86782 00:34:27.175 Removing: /var/run/dpdk/spdk_pid87109 00:34:27.175 Removing: /var/run/dpdk/spdk_pid88067 00:34:27.175 Removing: /var/run/dpdk/spdk_pid88391 00:34:27.175 Removing: /var/run/dpdk/spdk_pid89350 00:34:27.175 Removing: /var/run/dpdk/spdk_pid89674 00:34:27.175 Removing: /var/run/dpdk/spdk_pid90363 00:34:27.175 Removing: /var/run/dpdk/spdk_pid90638 00:34:27.175 Removing: /var/run/dpdk/spdk_pid90694 00:34:27.175 Removing: /var/run/dpdk/spdk_pid90742 00:34:27.175 Removing: /var/run/dpdk/spdk_pid90993 00:34:27.175 Removing: /var/run/dpdk/spdk_pid91169 00:34:27.175 Removing: /var/run/dpdk/spdk_pid91264 
00:34:27.175 Removing: /var/run/dpdk/spdk_pid91363 00:34:27.175 Removing: /var/run/dpdk/spdk_pid91417 00:34:27.175 Removing: /var/run/dpdk/spdk_pid91447 00:34:27.175 Clean 00:34:27.175 07:31:51 -- common/autotest_common.sh@1453 -- # return 0 00:34:27.175 07:31:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:27.175 07:31:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.175 07:31:51 -- common/autotest_common.sh@10 -- # set +x 00:34:27.175 07:31:51 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:27.175 07:31:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.175 07:31:51 -- common/autotest_common.sh@10 -- # set +x 00:34:27.434 07:31:51 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:27.434 07:31:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:27.434 07:31:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:27.434 07:31:51 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:27.434 07:31:51 -- spdk/autotest.sh@398 -- # hostname 00:34:27.434 07:31:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:27.434 geninfo: WARNING: invalid characters removed from testname! 
00:34:49.409 07:32:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:51.944 07:32:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:54.479 07:32:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:57.018 07:32:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:58.923 07:32:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:01.454 07:32:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:03.984 07:32:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:03.984 07:32:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:03.984 07:32:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:03.984 07:32:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:03.984 07:32:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:03.984 07:32:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:03.984 + [[ -n 5374 ]] 00:35:03.984 + sudo kill 5374 00:35:03.993 [Pipeline] } 00:35:04.012 [Pipeline] // timeout 00:35:04.019 [Pipeline] } 00:35:04.035 [Pipeline] // stage 00:35:04.042 [Pipeline] } 00:35:04.059 [Pipeline] // catchError 00:35:04.070 [Pipeline] stage 00:35:04.072 [Pipeline] { (Stop VM) 00:35:04.086 [Pipeline] sh 00:35:04.369 + vagrant halt 00:35:07.655 ==> default: Halting domain... 00:35:14.237 [Pipeline] sh 00:35:14.580 + vagrant destroy -f 00:35:17.866 ==> default: Removing domain... 
00:35:17.877 [Pipeline] sh 00:35:18.157 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:35:18.166 [Pipeline] } 00:35:18.183 [Pipeline] // stage 00:35:18.189 [Pipeline] } 00:35:18.203 [Pipeline] // dir 00:35:18.210 [Pipeline] } 00:35:18.226 [Pipeline] // wrap 00:35:18.233 [Pipeline] } 00:35:18.245 [Pipeline] // catchError 00:35:18.255 [Pipeline] stage 00:35:18.257 [Pipeline] { (Epilogue) 00:35:18.270 [Pipeline] sh 00:35:18.551 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:23.893 [Pipeline] catchError 00:35:23.895 [Pipeline] { 00:35:23.908 [Pipeline] sh 00:35:24.189 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:24.189 Artifacts sizes are good 00:35:24.198 [Pipeline] } 00:35:24.213 [Pipeline] // catchError 00:35:24.225 [Pipeline] archiveArtifacts 00:35:24.232 Archiving artifacts 00:35:24.328 [Pipeline] cleanWs 00:35:24.340 [WS-CLEANUP] Deleting project workspace... 00:35:24.340 [WS-CLEANUP] Deferred wipeout is used... 00:35:24.346 [WS-CLEANUP] done 00:35:24.348 [Pipeline] } 00:35:24.367 [Pipeline] // stage 00:35:24.372 [Pipeline] } 00:35:24.388 [Pipeline] // node 00:35:24.395 [Pipeline] End of Pipeline 00:35:24.448 Finished: SUCCESS